10.5446/54765 (DOI)
Hello, we are live. Welcome back. This is track three of the Plone Conference 2020. With us now we have Fulvio Casali, who has been doing archaeology on Plone, and these are the results he has gathered so far. Over to you, Fulvio.

Thank you, Erika. OK, I want to start with a poll: who is the handsome young man on the left? Do you recognize him? On the right, obviously, we see Steve McMahon, but this handsome young fellow on the left, can you tell me who that is? I have a hint for you on the next slide. Let me try to switch to LoudSwarm so I can see the answers to my poll in Slido. OK, let's see. Can somebody type an answer, please? I don't see any answers... ah, there it is, and they got it right. OK, so there's a delay. Very good.

All right. Kids, gather around, I have a story to tell you. My name is Fulvio Casali. I am fulv on GitHub, I have not really been on social media for a few years, and I have survived, which honestly makes me a bad candidate for the marketing team, because nowadays it's all about dopamine. The parts of my biography relevant for this talk are that I started using and developing for Plone at Web Collective in Seattle in 2008, until Web Collective closed, and I have been a solo Plone consultant basically since 2012. Also, I was originally a physicist in my early days, which is why I treated this whole project as an experiment, as you will see. But for today, I want to be your historian.

And this is the experiment. I have been somewhat active in the marketing team this year, and as is probably obvious to everyone, plone.org falls under the jurisdiction of the marketing team. It certainly needs attention, which will probably result in a relaunch sometime next year. But there is a lot of content. In fact, as we started this project, I was shocked at how much content I did not know was on there. For example, you can go back and find almost anything you want to know about old conferences. It is inconsistent, some years there's more, some years there's less, but the amount of information on there is amazing. For the purposes of an achievable task, we decided to just start cleaning up our news items: keep it SMART, specific, measurable, achievable, and so on, and focus on tagging. Go through the content and tag it; that way we would know more about what is there and what else needs to be done as a second project. First we decided on a controlled vocabulary of tags to apply, so that we wouldn't all start making up random tags that quickly become unmanageable and inconsistent. Then we split up the workload, so everybody got three or four years to work through: you would look at each news item one at a time and tag it according to the controlled vocabulary we had decided on.

Before I move on, I want to say: if anybody has noticed, maybe as a result of this talk or just in the past, that our news items, especially the old ones, do not look pretty, part of that is because plone.org was upgraded many, many times over the years, and unfortunately some of the formatting got lost. We never went back and cleaned it up after the fact, so that is how it is.
But it also reflects a bit of a dichotomy in how you want to manage a website: do you want to put your best face forward, or do you want to treat it as a record of everything you have done? I, for one, am eternally grateful to all of the teams that came before us, who migrated and kept the content we do have, because I think for a community website it is very important that we keep records. Even if it doesn't present Plone as a slick, very professional-looking company, I think keeping the information is the most important thing.

All right. So we strapped on our hard hats (I think this is Rekupeka), descended deep into the pits of the plone.org news items, and started digging until we found treasure. And this is a representation of the treasure we have. It's a classic example of a pretty picture that looks scientific but really doesn't mean anything. Yes, you can see how many news items were published each year, and there are trends, but the trends really don't matter; the overall number is what matters. We have 855 news items. Part of the reason this doesn't represent much is that editorial standards changed over the years. In early plone.org versions, self-registration was open, and lots of different people posted news items on the wildest topics. There must have been spam as well, but that got cleaned out, so there isn't any left, though some of the content is weird. In some years we had news digests: in 2009 and 2010 Mark Corum did them, and in 2014 and 2015 Christina McNeill. Special shout-out to Christina McNeill, she did an incredible job with those digests; the information she collected was beautiful, and you can really go back and look at it. Also, in 2014 we started our Discourse instance at community.plone.org, so at some point after 2014 a lot of announcements were no longer put on plone.org. Like I said, standards changed over the years and with the rotation of the people in charge of this aspect of plone.org. And in some years the conference was promoted very aggressively: in the months and weeks before, there were news items almost every other day, very repetitive, with almost no new information, just an attempt to drum up enthusiasm and excitement and get people to register. Anyway, we have lots of data, lots of information, and it will be interesting to keep digging through it; I'm highly interested in working on this.

This picture is here for a couple of disclaimers. The maps of our continents depend on the level of the sea. If, for any reason, ocean levels were 2,000 or 3,000 meters higher than they are now, way beyond any projection we have for climate change, just hypothetically, then basically all you would have left on a map of the world would be the mountain ranges. Think of the level of the sea as a metaphor for our level of attention: we can choose where we want to put our attention, and in this metaphor the news items are the peaks, because there are only a few dozen news items in a year.
But if you lowered your level of attention, say to the posts in discussion forums, or posts on social media, or to commits and issues, the activity on GitHub, for example, you would have a lot more data and a much more complex picture of the map of the land, so to speak. Still, it is worthwhile to look at the mountain peaks, just to know where they are and how they are laid out. So news items do not represent our whole history, of course, but they are interesting, and they do tell us a story. This is my personal view of the stages of development of Plone, based on this experiment, on the data I found. I would call the years between 2001 and 2005 the early days, the years between 2006 and 2014 the middle days, and the years from 2015 to the present the latter days and the rebirth.

The early days, obviously, were strongly influenced by our founders, Alan Runyan and Alex Limi. What I learned from the news items is that the first known screenshot of an early work-in-progress Plone dates to August 12, 2001, and we have yet to find it; we don't actually have it. Plone 1.0 was released in early 2003, with the final release candidates in December 2002. The first Plone conference was in 2003 in New Orleans. And the amazing thing for me, because I had this idea that the Plone community and the whole Plone-verse would have grown linearly, is that that's not the case at all: it really exploded in the early days. Think about the fact that the first Plone conference had 150 people. This year, this week, we have a very large number, but that's partly because we are online, so it is much easier for people to come; I think we would have considered 150 a success this year if we had held the conference in meatspace. Down below you see some milestones. The thousandth CVS commit was on May 24. Downloads are a pretty arbitrary measure, but still, the number of active developers I found in early news items was already in the hundreds in those years. My notes are a little scattered, but trust me, there was a lot of activity in terms of news exposure, conference exposure, printed articles, interviews. They were really heady days.

The middle days, which I would call the climax, are where Plone basically consolidated. Worldwide adoption kept growing fast and steadily, even though the first four years had already been an explosion of worldwide adoption. We had a lot of policy activity at the level of the Plone Foundation. The middle days were the days when all the books were published. We got a lot of awards. Google Summer of Code participation dates back to 2006. User groups shot up in the early days but kept growing in the middle days. And that's also when the multi-stakeholder initiatives happened: PloneGov and PloneEdu, collaborations with organizations representing municipal bodies like city and regional governments, all date to this period.

And the books. Who remembers these books? In fact, I have another poll. Let's see if I can activate it... yes. Please select your favorite Plone book from the list. One of my personal favorites was the Web Component Development book by... gosh, now I forget. I know, the complex names.
Aspeli's book was obviously one of the most influential, probably for a lot of people, myself included. But let me go back: 2011 was actually the last year a Plone book was published. The first one in the 2011 column was actually published in 2010, but it would have messed up my layout, so I put it there; close enough. We got a lot of awards, and as you can see, we got awards even in the early days, 2004 and 2005, but most of them we got in these middle years. Some were awarded to Plone itself, some to initiatives like PloneGov and PloneEdu, and some to specific Plone companies or for the merits of specific websites they launched, but we still like to take credit for them as a community. We had lots of articles in the news. This is all very incomplete because, again, our news items are pretty inconsistent in this regard, so it's not easy to extract this information, but there are several we know of for sure, and others we can only speculate about. And we went to a lot of conferences, even in 2002, before Plone 1 was released.

In terms of policy work, the Plone Foundation was established in 2004. The Plone trademark was finally settled in 2007 and handed over to the Plone Foundation. We have had a Plone diversity statement since 2010, and an anti-harassment policy first adopted for the Plone conference in 2011. The work of the Plone Foundation has been ongoing, and there have been more policy achievements since then. Obviously, we know how strong Plone's adoption was, and still is, at the government level. And a lot of user groups got established in many, many countries around the world.

Here's another pop quiz. This is the invite for the first San Francisco Bay Area meetup, in November of 2005, and the poll question is: who wrote this announcement? This cracks me up; it is pretty typical for one of these characters. I think I know the answer, but I'm not actually 100% sure. I wish the answer were Spanky. Initially I thought it was Spanky, but I think it is really Rob Miller. At least, the alias of the user who logged in and posted this news item is one I had to Google, and it is still in use by a Rob Miller on GitHub, so it's probably him. But it could have been written by Spanky, for all we know.

And here is another precious, precious gem. In 2005 there was a sprint in San Jose, and at the end they gave out awards. Sorry if the font is too small to read, but I'll read a couple that just crack me up: the Most Irresponsible Use of CVS HEAD award, the Second Best Looking Team Coach award, the Most Efficient at Converting Test Errors into Failures award, the honorary title of Plone Pimp award, the Least Likely to Be Caught Without a Sound System award, the Most Likely to Be Flipped Off While Taking a Picture award, and the "My God, This Is How You Develop Software?" Look award. And the beautiful best use of "triple" in the same award. So: everybody do the Plone walk, at Plone Symposium East in 2009.

And so we come to the latter days. We're still going strong. Sprints are still going strong, conferences are still going strong. Training: who really needs books? We have such wonderful written training material. The Foundation is still working hard. And Volto is really experiencing the same explosive growth, in my opinion, that Plone had in its early days.
So let's focus on that. Now, there are some watershed moments. If you walk along the red line here in the middle, where it says continental divide, and it's raining, basically any raindrop that falls to your right, if you're walking north, is eventually going to flow down into the Gulf of Mexico or the Atlantic, and every drop that falls on your left is going to flow into the Pacific Ocean. That's a metaphor for a few moments that stood out for me as I went through this material.

This is one that, to me, is really representative of the character of this community. It is the end point of a long process which started with the formation of the Plone Foundation, or really was probably born in the heads of Alex and Alan, when they decided early on that they wanted to hand this over to the community. But in 2007 it actually happened, when the company known as Plone Solutions gave up that name and took on the name Jarn, because they wanted to create a level playing field for everybody else. I think this was very influential on every member of the community: knowing that this is what we are about. And in 2007 the trademark was finally taken over by the Plone Foundation.

And this is the other watershed moment. I remember this day. It was in 2011, in my own early days, and I didn't know many people. I had never even heard of Dorneles Tremea, but I saw this outpouring of grief and love, and it made me realize what this community is made of. I just want to say to everybody here: no matter who you are, and some people may not be very widely known, somebody here sincerely loves you and wants nothing but the best for you. And this is true. So please write in your own watershed moments, if there are any for you, of finding the Plone community over your years.

And here are some conclusions. I'll just jump to the last one: in the early days, plone.org was upgraded as part of the release of every Plone version, and I think we should go back to that. It would be difficult, but I think it's an important challenge. Some wider conclusions: we cannot compare ourselves to our past selves; we are in a different time now. We have to look ahead, keep what works (sprints, conferences, and the Foundation), and support the people, companies, and clients we have. Philipp said the other day that every line of code that is not written is a good line of code, and I think we should expand that: every line of code that we do not force our clients, our developers, our consultants, or our Plone companies to write is a good line. That means we do not break backward compatibility, and we do not make more work for people than necessary. And we should invest in the greatest promise, which at this time, I think, is Volto. Thank you. And I have one last poll. All right, I will jump into this.
A brief compendium of Plone's history. I've read hundreds of plone.org news items, so you don't have to. In the process, I discovered lots of precious nuggets of our collective lore, some evergreen, some almost forgotten, sure to evoke warm fuzzy memories for the grizzled veterans among us.
10.5446/54766 (DOI)
Thanks, everybody, for coming to this third session. I'm glad to introduce Jens Klein from Austria, who will talk to us about performance profiling and power consumption. Go ahead, thanks.

Yeah, hello. My name is Jens Klein, from Austria, and I want to talk about this topic. You might get a call from a customer saying: hey, hello, my site is slow. So what to do about it? That's the thing: "Plone is slow." This talk is about performance. You need to ask questions. This is what often happens to me: people I have never heard of call about a Plone site being slow, "I got your name from somewhere on the internet." Then you can ask questions: is it a specific page that is slow? Is it slow under high load, under massive writes, or overall? And you keep asking: did you check your hardware, database, network? "I checked everything." Yeah, really check first. But at some point you see: OK, it is Plone. Plone is slow. Now we have to ask questions again, or look at the instance and inspect it. There are several possible paths: often add-ons are bloated and can be slow in certain situations, server-side code can be slow, or even Volto code can be slow. You can cache things so the back end does not need to deliver them, but at some point caching is not enough. Then you mostly end up on the path of "something is slow in my Python logic", and that is what I want to focus on: the Python logic, not caching, not database optimization or anything like that.

So you need a way to find the performance problem; you need tools. Mainly these three: primarily py-spy, because it is a runtime profiler; then, for specific page loads, repoze.profile; and the disassembler from the Python standard library. What is py-spy? It's a great tool overall. It has a top-like output, or you can store the output for later analysis. The top-like output is really great because you can attach to a live application and look at what is happening in a live process. Then there is repoze.profile. It's a classical Python profiler, and it slows down the site, as opposed to py-spy, which just observes the running process; the profiler really gets in your way and slows the application down. It is WSGI middleware, and it works with the new WSGI implementation in Plone 5.2. With it you can profile a single request. That does not really work on a live instance, but if you copy the database and everything, you can profile a single request and look at what really happens. (I'm missing one slide here anyway.) And then you have the dis disassembler for Python, which is used to look at one function at the bytecode level. This is something I rarely use, but it's very powerful for getting insight into what happens at the bytecode level, and if you have functions that are called very, very often, and that happens, then you can analyze those functions with it.

With these tools we found some improvements for Plone, in the Plone 5.2.2 and 5.2.3 releases. There were some bottlenecks, primarily in plone.dexterity: we now avoid an early providedBy call, which is expensive and was called very, very often, and there was the possibility to extend the list of early exits on common attribute accesses, because getattr calls on content objects, for example during indexing, happen very often.
We optimized caches, like the schema cache. And we found a leftover: a thread-local synchronization had been built in for an ancient caching mechanism that was later refactored, but the synchronization code was kept, and that is very slow as well. Then some interesting things happened in zope.interface. I had never gotten my fingers dirty in zope.interface before, but at this point I found it was a place where things really slow down. In Plone we use the Component Architecture very, very often, in every place, so if something there is slow or optimizable, it has a big impact, because it is called so often. For example, I did a reindex of an index on a midsize database, and one function was called something like 120,000 times. If you optimize something in there, it has an impact. I found a leftover __hash__ override: the classical hash of an object is implemented in C, so if you override it you end up in bytecode, and that slows things down a lot. Just removing this hash override improved the speed in this area. Another hash improvement happened in the Interface class itself, and we found a way to get roughly a five-times speedup in lookup, lookupAll, and subscriptions, which are called very, very often; this was analyzed at the bytecode level, and I'll show that later. Jason, who supported and reviewed all of this, made improvements in other places as well and found a way to reduce the memory consumption of zope.interface. So I can really recommend updating to the latest Plone 5.2 release with all of this in it; it really has an impact.

So, let me demo this a bit. If you have a Plone site, let's start a Zope instance with Plone, and here I start py-spy. I do this with sudo, because I connect to an existing process and that is only possible with sudo. I grab my instance's process ID here; you can just do a ps and look for your PID, or take it from the PID file, and connect to it. I have a site filled with some news items. I run py-spy against it and reload the site, and in this top-like output (I can make it a bit bigger) you see the own time of a function and the total time it used. The own time is what the function itself uses per call, and the total time is the accumulated time of the function with all its subcalls. You can switch the sorting between the different times and see what is called and how much time it used. If I reset this and reload the page, I get different numbers because the site wasn't warmed up before: you can see the reload, with all connected calls, took 1.34 seconds in total time. If I switch to own time, you see where the calls happen and which ones take a lot of time. We see the record proxy in plone.registry at 60 milliseconds, and there are other calls in here that take some time, like queryUtility. We could now look into those functions to get an idea of what's going on there. If you have a site with really large performance problems, this gives you a good feeling for where your time stacks up and an idea of where to look. The next thing I want to show is repoze.profile. I added repoze.profile to my setup.
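Before that, a rough sketch of the py-spy workflow just described. The PID, the pid-file path, and the process name in the grep are only examples from a typical buildout layout, not part of the talk:

    # install py-spy into any Python environment
    pip install py-spy

    # find the PID of the running Zope/Plone process, e.g. via ps or a pid file
    ps aux | grep runwsgi
    cat var/instance.pid        # path depends on your buildout

    # attach to the live process; sudo is needed to read another process's memory
    sudo py-spy top --pid 12345

    # or record a flame graph for later analysis
    sudo py-spy record -o profile.svg --pid 12345

In the top view, the %Own/%Total and OwnTime/TotalTime columns correspond to the own time and total time Jens talks about.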
Repoze.profile works like this: I have it in my buildout.cfg and can just enable it with a profile = on setting. I use a special branch of plone.recipe.zope2instance for that. This is work in progress; it works, and it's a very simple change, but I didn't write any tests for it and didn't merge the branch, so in fact it's based on the master of plone.recipe.zope2instance and just needs some love. If I enable this middleware and then start the instance, I get this nice view here. It is a bit like the old Zope profiler, but as WSGI middleware. In this view you can sort the output, and what's interesting is the total time of the calls. I hope I have some data in here... no, that's not good. Let's first clear it, browse the site, and refresh: now I have up-to-date data on the time used by the calls. This is total time; I can sort on cumulative time, which tells me how much time all cumulative calls of these built-in methods took. I can also sort on the number of calls, which is very interesting: we find that a method like split is called very often, and that recordsproxy.py of plone.registry is called very often, so we may want to look at that and maybe get better numbers by optimizing the record proxy. There are a lot of calls. It is also interesting to look at it from a different angle: if you look at where the calls into our functions come from, you get an output that says the record proxy was called from registry.py this many times, and so on. You really get an idea of how your code is called, how often things are called, why, and from where, and if you need more information you can show the full directories, so you get the full paths of the files and can start inspecting things directly.

The last thing I want to show is the bytecode-level analysis. I start the instance in debug mode. I took a Plone 5.2.0 with our problem at the bytecode level, a function that is called very often, and I try to analyze why the function is slow and what could be optimized at the micro level. What I found with repoze.profile is that the _lookup function from zope.interface's adapter registry is called very often and is not that fast. So I import it, then I import the Python disassembler and start to look at it at the bytecode level. Here we go: you have the whole function disassembled into bytecode, together with the source line numbers. This is the _lookup function, the version used in Plone 5.2.0. In this lookup function we see a loop, and another loop inside it; _lookup is called very often, it iterates over the components, and it calls itself recursively, so it walks a tree and looks up components. At first glance you wonder what you can even do here; it's called something like 120,000 times in a reindex of one index, and more often, like millions of times, on larger sites. So let's have a look. We see the relevant lines inside the loop, and if we look at the bytecode there,
we see a LOAD_FAST of components followed by loading its get method, every single time, and that repeats inside the other function block as well: components, then get. If we always have to load components and then load get before we can call it, we can save some time and make it faster by binding components.get to a local variable before the loop starts. And that's what we did: components_get is assigned once before the whole block, and then we just call components_get inside, which eliminates those repeated lookups. If I did the same disassembly on a newer Plone 5.2 with the fixed zope.interface, you would see that this is eliminated, but I think we don't have time for that. Sorry, but you can try it yourself.

Other things I found while looking at the code and at request timings: plone.restapi has a lot of optimization potential. At the moment it's a bit difficult, because it supports Plone all the way down to 4.3, so you have a lot of code paths and the tests are getting difficult to manage; probably this would first need getting rid of the old Plone and Python 2 support. One thing: in Plone 5.2 the navigation was optimized, but plone.restapi does not use this optimized code path and still uses old code that is kept for backward-compatibility reasons. Also, the registry is called too often, so optimizations are possible there as well. And then there's a different thing in the page templates: page template and Chameleon expressions are evaluated very, very often, and Python expressions are much faster than the usual path expressions. A Python expression starts with the python: prefix; TALES expressions without a prefix are path expressions, and they are way slower. So if you use Python expressions, that's much better. I recommend it, and we should rework those places in the Plone core code to use Python expressions. Even more introspection can be done; probably there is also potential to optimize in Chameleon itself. That's the outcome, I think, and if you look deeper and invest more time, there are probably more things to do.

The title also says power consumption. The idea behind all of this is also to reduce the carbon footprint; not all servers are running on green energy. I think it's important to think about reducing power consumption, the amount of data transported over the web and so on, not only for the user experience but also for the environment. So my advice: start introspecting the performance, not your own performance but the performance of your application. And as usual: lots of virtual hugs, stay healthy. If there are questions, maybe there are some on Slido; otherwise we can meet in Jitsi and talk a bit. I'm happy to answer any questions.

Thank you, Jens. And we had a request earlier to explain how to get to Jitsi, because for some people it's actually not clear at all. If you scroll down on the page where you are in LoudSwarm, under the video window, in the middle column, there is a blue button, "join face to face". That's the Jitsi channel where you can go and ask Jens questions and chat with the other people who heard this talk. So thanks again, Jens, and we will see you here again in 10 minutes. Okay.
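To make the bytecode-level trick concrete, here is a minimal sketch using the standard-library dis module on a toy function. The function and names are illustrative only, not the actual zope.interface code:

    import dis

    def lookup_all(components, names):
        # Original style: the attribute lookup happens on every iteration
        results = []
        for name in names:
            results.append(components.get(name))
        return results

    def lookup_all_fast(components, names):
        # Hoist the bound method into a local variable before the loop, so the
        # loop body does a plain LOAD_FAST instead of LOAD_FAST + LOAD_METHOD
        # (on CPython 3.7+); this is the kind of change described in the talk.
        components_get = components.get
        results = []
        for name in names:
            results.append(components_get(name))
        return results

    # Compare the bytecode of the two loops
    dis.dis(lookup_all)
    dis.dis(lookup_all_fast)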
Plone 5.2.2 performance is way better than before: it consumes less power and is greener now. But how did we find the bottlenecks? What actions were needed to make it faster? Where are our pain points? Can we improve further?
10.5446/54768 (DOI)
Over to you, Stefan. So hello, everybody, good morning or good afternoon, wherever you are; for me it's good afternoon. My name is Stefan Antonelli, I have worked with Plone for ages, and feel free to contact me in the face-to-face afterwards, or via Twitter or email or whatever you prefer, if you have questions or follow-ups. So what is this talk about? I will try to explain Plone 6 theming from scratch: basically, how to create a theme for Plone. And since everything is Bootstrap now, that's pretty easy, finally. The previous talk by Peter already showed some of the new stuff, so I will keep that part a little short and talk only about the differences, or the good things I want to bring out. We build a theme from scratch, so there is no dependency on Barceloneta at all. There isn't much dependency anyway, because most of the stuff is Bootstrap, but in my example I leave Barceloneta out, so you get a clean package with almost no dependency on any other CSS or whatever is in there. We also dropped Diazo, so in this example there is no Diazo needed, and that makes the site a little bit faster as well. Everything is based on Bootstrap; we already heard that this afternoon, and if you followed that talk you have already seen the one before, so I hope I will not repeat everything. We decided to use Bootstrap, and it was not a hard decision: it's one of the major frameworks out there, everybody knows it, everybody knows how to use it, and that's why we wanted it. It's one of the most popular; it brings typography, forms, buttons, tables, everything you need for a framework like Plone and everything you need for a website, for an application, or for whatever you build on top of Plone or with Plone.

So let's actually create the stuff. Peter showed an example of doing all that, and I made it a little bit shorter; and after that lovely talk about kittens I can only lose, so please don't expect too much from me. The first step is creating an empty folder. There is also the plonecli; check out the docs for how to use it, it's basically a one-liner for creating an add-on package. That's done like this in the console: it creates the package, and you have to answer some questions; I've filled them in here. I only show it because we keep Plone version 5.2.1 for now, and we switch later to the coredev buildout. As far as I know there is no bobtemplate for Plone 6 so far; correct me afterwards or give me a hint if there already is one. The defaults are fine, keep 5.2. After that I made the first commit, so you can follow the changes. I also made a package; again, it's not so pink. You can check it out on collective: plonetheme.munich is the name. I liked the idea of giving themes the names of different locations; the first one was Tokyo, this one is Munich. Munich is similar to Barceloneta, but it's not full-featured: it keeps things a bit simpler and doesn't support every feature you see in Plone, which makes it much easier to follow. So: bootstrapping, not the framework but bootstrapping the theme. Create a virtualenv, install the requirements, and that's done so far. After that you can run buildout, but you don't have to.
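A rough sketch of the plonecli steps described here; the package name is just an example, and the exact prompts may differ, so check the plonecli documentation:

    # install plonecli and create a new add-on package
    pip install plonecli
    plonecli create addon src/plonetheme.munich

    # answer the questions (Plone version, description, ...), then:
    cd src/plonetheme.munich
    git init && git add . && git commit -m "Initial package created with plonecli"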
I recommend cleaning up the package a little bit first. I like clean packages with only the few files I really need, so I remove what's not necessary: all the test stuff and the constraints for Plone 4 and 5. The bobtemplates package brings in a lot of stuff, and I basically delete it. Check the diff; there are only five commits in the repository, so you can see exactly what I deleted or changed. We have to extend this package a little. As I said before, it's a coredev buildout, and we use the LTS branches of the different packages. Peter showed the configs that are used in our coredev buildout; in your custom project you can check them out or use them as a dependency, extend your buildout configuration, and you get a coredev buildout with what you need to test the classic theming stuff, without having to use the coredev buildout itself: create a new package and use that. That's basically what we use for the LTS branches. Then run the buildout. One thing I want to say: it will take a while, especially when you run it for the first time and don't have all the packages on your system, so give it some time. After that you can start the instance; you don't have to, but you get a plain vanilla Plone 6 up and running with that structure. Start it and check it: if you can see Plone like what we showed in the first talk, you are on track so far.

Beginning from here, we add our theme structure. There is a bobtemplate that does everything for you; you can use the theme bobtemplate and it brings in everything. I prefer to add only what I really need, which is why I add the structure manually. This is pretty much copy and paste, because I have projects where I already have it, and for this use case plonetheme.munich is a good example; we can create a branch for this state of the theme, so you can see what you really need for the theme structure. I guess we will do a bobtemplate for that, but that's not something I want to promise. The first thing we have to do is add a package.json in the root of the package; it's the same as in the Barceloneta theme. This is the npm and JavaScript story, which I don't want to talk about too much. Two things are interesting here: the dependency on Bootstrap, alpha 3 in this example (we are on the beta now, we'll change that quickly), and, more interesting, the scripts section, where you basically see what you can type on your console. With the package.json in the root and this dependency, you see the scripts we can run later, for example for creating the distribution CSS and JavaScript, and watch. That's what you saw in Peter's talk as well: a watch daemon that rebuilds everything when you change a single line in a Sass file or a JavaScript file. Then we have to put some static files in: the ZCML configuration, and I guess you know what's happening there. The interesting thing is the manifest file in the theme folder. It's almost empty, only six lines, and the interesting part is that rules is empty. With no rules defined, Diazo just does nothing. I love it, because it's pretty fast: it's not modifying whatever comes from my main template.
It basically ships the main template without processing; that's what happens when you leave the rules empty. Unluckily there is no switch to turn Diazo off completely, which is something we can maybe discuss later. What we see here is the actual bundle: yes, we still use the resource registry, and we have to register a bundle. What we do here is switch the compile flag to off and register one CSS file, which is the result of our file-system compile of Bootstrap and our own stuff. Here everything comes together, and this registers it all in Plone. We will compile the bundle in a minute, a few slides later. Then we add the theme; that's also a no-brainer, just copy and paste the stuff. Again, check the package for the details; I'm not showing everything, but there is not much more than what I discuss and show. Compile: we compile Sass to CSS. Sass is what comes with Bootstrap, and you can extend it by adding your own Sass files. Basically we import everything from Bootstrap, and that's the compile process. We use npm, or in my case yarn, to compile the stuff. Everything is tied together in the package.json I already mentioned; for example, the Bootstrap Icons are listed there as a dependency, so we don't have to download them separately, they just go into the node_modules folder and are used from there. Compiling means it resolves variables, packs, uglifies, and minifies the stuff, whatever is defined in your package.json. Again, check the scripts section to see exactly what is available and what's going on there. You can configure that file and extend it with your own checks, pre-commit hooks, or whatever you want to add; I like the idea of how it's done. One thing I want to mention is the yarn dist command, which basically compiles the stuff for distribution; it's defined in the package.json. The last thing is the yarn watch command that we saw in Peter's talk.

Next: the first startup. Everything is in place, we have compiled the bundle, so I would say let's start Plone. What could go wrong? The question I asked myself when I started it up: is Plone broken now? Any expectations? Can we do a poll here, in this window, can we use that somehow? Let's make a poll on what the audience is expecting, whether everything is broken or not. Of course it's broken, it's Plone. This is what it looks like after these steps. But there is still good news: not everything is broken. Plone components just work. You saw the navigation, for example, or the breadcrumbs, because they are standard markup now. Everything from the patterns, the toolbar, portlets, forms, that stuff is already working. Before I show some screenshots, I want to tweak my changes a little bit. Let's tweak the main template a little. You cannot override it with jbot; you have to register it in ZCML and add the template to override the actual main_template. An example for that is also in the package. After tweaking the main template a little, I got my column structure back, and now I can say theming is fun again. You already saw in Peter's talk that Bootstrap variables control the basic behavior of the theme: you can define fonts, colors, shadows, whatever you like. In the node_modules folder there is a variables file where you can see all of Bootstrap's variables that you can touch. I linked the GitHub URL here so you can access it in the browser quickly.
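As a minimal sketch of the variable-override approach described here (the file name, values, and the final selector are illustrative; the idea is just to set Bootstrap variables before importing Bootstrap's Sass):

    // theme.scss -- example only
    // 1. Override Bootstrap defaults before the import
    $primary: #007eb6;
    $enable-rounded: false;
    $font-family-base: "Open Sans", sans-serif;
    $link-decoration: none;

    // 2. Pull in Bootstrap from node_modules (resolved by the build script)
    @import "bootstrap/scss/bootstrap";

    // 3. Your own rules come after, e.g. the little portlet margin mentioned later
    .portlet { margin-bottom: 1rem; }

The watch script from the package.json would recompile a file like this on every change.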
The variables file has pretty much everything in there, and I only show a few: rounded corners, primary color, fonts, link decoration. In most projects those are the main things you have to change first, and when that is done, half of the theme already has the appearance you want. Next is templates. Any expectations on that? Overrides with jbot. That's pretty simple; we already know the story of copying templates to customize them. For example, I copied in my custom navigation, and I copied in the template of the search gadget to change the markup a little bit. We had that in the Slack chat before: the first thing I always kick out is the button for searching only in the current section, which is only useful on big sites; most sites don't need it, and you would have to handle it in the theme somehow. Whatever. You can of course add custom views for your project: custom views for listings, for content types, whatever you know from before, with the big upside that it's now much easier to use the components and styling that are available. I have to say a word about columns: I don't like them. For this example I kept them in; I grabbed the main template and added the column structure to it. Diazo is not doing that here, so I had to fix it there. Maybe we change that before we release Plone 6; I guess this needs some discussion, but bringing in some column structure to keep things in order by default is not a big change, so maybe we can do that. Anyway, columns: I had to fix them at the current state of development. In most cases I need one column, or one column plus a side column with portlets or whatever custom stuff I put in. I moved the portlets into one column, for example; an example is shown later on. Portlets are the next thing I don't want to talk about too much: they are in this theme, you can change them, you can add them, they are there, whatever you want. Since they are cards, and cards are already styled, they just work out of the box; I only had to add a little margin. That's also what I said in the Slack chat before: you don't need much CSS to get Plone to a state where it looks, in my opinion, better than Plone 4 or 5 looked by default.

Let me show you some examples. Plone 6 with the Munich theme: that's the current state, or let me say the state from yesterday evening. Yes, it looks like Plone; it's not Barceloneta, that's what I have to say. With the changes you saw before, and about five lines of Sass in the right place, it looks like that, and beginning from here you can do whatever you want. The theme is basically the same; I don't like some details of Barceloneta, and this enables me to do something different, like the search gadget on the top right, the navigation with its drop-downs, or whatever you want to change. It's a clean theme, and it supports most of the features, with some restrictions. This is the screenshot after adding the basic theme. We have the listing here; it all works out of the box because the markup fits the Bootstrap CSS. Then we have the folder listing, the tabular view; here you also see the tiny Bootstrap icons from the icon resolver. The breadcrumbs look a little bit different because they are the Bootstrap default; with some CSS and some template changes it's no big deal to go on from here. This is what we really did for the proof of concept in the Tokyo theme. The ugliest part was fiddling around to get all the forms working somehow; with great effort that has been done, thanks for that, and I owe some beers or whatever at the next in-person sprint. And this is what you get with not a single line of custom CSS.
It's a form that works out of the box. Whatever style you put on top of it, the forms work, and they look, in my opinion, clean and good. You can still improve them if you like: more padding, colors, whatever. The back end, the control panels, also got some work; Peter showed that already. Imagine that would all be pink or something like that. By the way, bringing in a little bit more color... it's not really about color, it's just a different concept. plonetheme.tokyo, that was basically the history of all of this. The idea was modernizing Plone's classic UI. I didn't like the idea of having Plone 6 classic come out looking, feeling, and acting the same as Plone 5. That's why I started that proof of concept, during the conference in 2018 in Tokyo; you can see it in the name. The initial intention was a clean theme based on Barceloneta; somehow that escalated quickly. Again, everything is Bootstrap. I dropped columns: we have only one main column, which is fine for a simple application or a simple page or whatever, where you don't want to have columns. You can add columns, of course, inside your templates or whatever you add, but the basic theme is without columns. That also means no portlets; they're gone too. It's fully responsive, and I mean really fully responsive. Here is a screenshot of the state for 5.2. I'm going to bring up a branch that shows it for Plone 6, which mostly means deleting a lot of stuff. We already have real-world projects that use this theme or build on top of it; when they move to Plone 6, we only have to delete a lot of files. The front page; here we have a login example; the forms are styled with a little bit more padding. This was basically the pre-work, the ideas, the testing of the CSS for a clean Plone 6 site. I like the typography from Bootstrap; here I guess it's only the Open Sans font, and it looks pretty clean. As for the responsive stuff, I really want to show it off, because I like the responsive things we did there; these screenshots are actually from my phone, and with the sidebar story, that's the next thing, the toolbar thing, it really works. There will be an update for Plone 6; wait for that before you try it. I mentioned the toolbar: yes, we dropped it. This is a little tricky, because we need a toolbar; we need to edit something in Plone. We had the idea of bringing the editing features and the navigation together somehow, and that's why we created the package called collective.sidebar. A short commercial for the sidebar: it pops in, and you can also pin it. There is only a little work in it so far, but it worked quite well in that project, and we have kept working on the concept of bringing the editing features and the navigation together. That's basically collective.sidebar; check it out. It's a drop-in replacement, and it's only one template to override, so you don't have to fiddle around with the edit bar if you'd like to change it. It needs a little love for the whole icon story, but basically it works. That's how it looks: for anonymous users it includes the navigation and static links and some stuff, and when you log in, you get all the editing features as well. We left out the part for portlets, but it's a small effort to make it feature-complete. The only thing that's not supported is add-ons registering things that appear in the toolbar, but I don't want to cover that. OK, so that's the responsive view again, to show that it works; I talked about that already.
Here is the GitLab URL for when you want to check it out. It works for 5.2 at the moment, and I'll bring it to Plone 6 quickly, I guess; maybe I'll work on that over the weekend during the sprint. OK, let's see the stuff a little bit in action. I see my timer and I have about four to five minutes. I made a little demo of how it all works together, and I'll try to explain what we see. That's the login form, and after logging in, the alerts are also Bootstrap, so in Plone 6 they will look the same. Imagine the same on the phone: the idea is that the sidebar is never wider than a normal phone screen, you can just scroll up and down with your thumb, and it just works. The forms are responsive as well, so they also work, and the content of course works anyway. So that's what we see here. Adding content, nothing new in that place. I already mentioned the real-world project we have: it's a big site, and the customer has more sites using this package, and we are improving the sidebar and the Tokyo theme with that project; lots of stuff and lots of ideas have come back from it. I'm really looking forward to the discussion and to the sprint. Give us feedback on what we can bring in; give us feedback for the Munich theme as well, because I add what I need for my custom projects and I try to make it reusable for others somehow. So yeah, I'm really looking forward to it. OK, I guess that's five minutes; my host is getting sweaty now, so I have to stop here. Sorry. Are there any questions? I want to give the word back to the moderator. Are there questions from Slido? I cannot see them on the screen.

Yes, there are questions. I will read them out and also send them to you directly in the chat, so you can start looking at them there, but I'll also read them out so that people who may not see them can hear them. The first question: your demo was a file-system theme. What about through-the-web theming approaches? Is there still a viable path for that?

I like the idea of concentrating on one way of doing themes, and this through-the-web theming story is a little bit tricky. Compiling bundles from Less in the browser is no longer part of Plone 6, so that is not supported. You can still upload a zip file, but for through-the-web theming I really have to pass the question on to people who know more about that; I'm not an expert in it. Back with Plone 3 they already said you have to do file-system development, and from that point on I switched to the file system and didn't try to do things through the web, except for really small changes and customizations in a project.

OK. Personally, there are some workflows that I use that are based on through-the-web, so I would certainly value that, but I guess we'll explore it as we continue.

I can recommend the talk after this one. Mike is showing the Diazo examples, and I guess he will also mention the options for the file-system and the through-the-web story, because it's still possible to add a theme there, and we have a custom CSS field, which is inserted in the header. It acts like the custom CSS you know from the past, and in the end it enables you to make some small changes through the web; it's of course not as powerful as changing Bootstrap variables and compiling the whole thing. But watch the talk afterwards, maybe there are some ideas.

OK. So we'll leave that for the next talk.
The next question: are the tabs on forms still breaking onto a second line, which apparently is not good, or are they more responsive using Bootstrap 5? Unluckily, I cannot answer that question. I had no example, and I cannot remember whether I saw them breaking anywhere. We are in development at the moment, so if they break, we should fix them before we release Plone 6. As far as I know we use the default Bootstrap component for that, and I can imagine there are responsive classes you can add to get them to work. I cannot promise, but I think they are built in a way like that. OK, I think those are all our questions. Let me know... sorry, it's something to look into. At this point, we're going to move over to the face-to-face. So thank you once again for your presentation, Stefan.
Plone theming from scratch was never this easy! The talk is similar to the previous one (Plone 6 Theming based on Barceloneta) and should give an idea of how to use the work of PLIP #2967 without a dependency on any Plone theme (e.g. Barceloneta).
10.5446/54769 (DOI)
Thanks, David. Let's get right to it. We have already seen a couple of theming talks, so you have a bit of an overview of what we have worked on over the last months. I will now go a bit more the classical way, and also bring in some tips and tricks and opinions from my side. We are talking here about the classic UI, so not Volto, not the fancy side; there's still a lot of time to go until everybody jumps ship to Volto. We have big installations out there, so we will need to at least maintain what we have, and there will still be new projects starting the old way, because that's the only resources, or the only knowledge, they have. So let's go into the details. I will give a really short overview of what Diazo is; I think most of you probably know it, so we will not go into detail there. Then: building a custom Plone 6 theme with Diazo, and we will use the plonecli to create that kind of package.

Plone theming in the classic UI, when we use Diazo: we have an HTML5 theme and a mapping configuration, and this mapping configuration maps our theme onto the dynamic Plone content. We were also able to deploy this as a zip file and upload it in the Plone theming editor. That will still be possible, even though we are giving up the in-browser compiling, which was never really reliable; because it also handled JavaScript, it actually caused a lot of pain for a lot of people, so we will get rid of that. If you use just plain CSS, you can still change your CSS in the theming editor. This is how it looks now, and it will still look much the same in the future; there are some more options which I'll show later. Let's see how this works with the mapping. We map dynamic Plone content elements into static layouts. You can have your static layout: either you get a ready-made layout from the internet, or you get it from the designer of your choice, or you build it yourself; the easy way now is to go completely with Bootstrap. On the right side you have your Plone; this is an older version, but the principle didn't change. Then you have Diazo, which brings it all together: you have your mockup, which is the static theme, you have your vanilla system, which is Plone, and the Diazo rules that merge it all together into the final themed website. Here's one example: if you want to take over the main navigation, you could use a replace statement. You select a part from the content side, in this case the navbar-nav, and you put it into the matching point on the theme side. The selectors can vary; you can also use XPath, which can sometimes be faster, though not always, and it's usually more verbose and harder to read.

Separating the front-end and back-end theme: don't re-theme the back-end views. It's possible to theme the back end, but most of the time it's not really necessary. The back end looks fine, it's functional, and it has the advantage that it always looks the same, so you can use documentation and screenshots and don't have to redo them for every customer approach. Focus on the front-end layout instead. For the front-end part of the rules, you have an overview here: we have a main wrapper rule which has a condition on the body classes viewpermission-view and viewpermission-none.
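A minimal sketch of such a rules.xml; the theme-side selectors (#navbar, #main-content, index.html) are illustrative, and the backend include follows the Barceloneta pattern described next:

    <?xml version="1.0" encoding="UTF-8"?>
    <rules xmlns="http://namespaces.plone.org/diazo"
           xmlns:css="http://namespaces.plone.org/diazo/css"
           xmlns:xi="http://www.w3.org/2001/XInclude">

      <theme href="index.html" />

      <!-- Front end: Plone marks publicly visible views with these body classes -->
      <rules css:if-content="body.viewpermission-view, body.viewpermission-none">
        <!-- pull Plone's global navigation into the theme's navbar -->
        <replace css:theme-children="#navbar .navbar-nav"
                 css:content-children="#portal-globalnav" />
        <!-- map the main content area -->
        <replace css:theme-children="#main-content"
                 css:content-children="#content" />
      </rules>

      <!-- Back end: as described below, wrap this include in the inverted
           body-class condition so Barceloneta's backend rules only apply
           when those classes are absent -->
      <xi:include href="++theme++barceloneta/backend.xml" />

    </rules>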
If these classes are on the body tag, then we are talking about what we call the front end, which is basically all the visible parts, no matter whether you are logged in or not. When you are logged in you still have this part, so everything there is styled the same, logged in or not. Here you have all your standard rules from the past years, or, if you're new to Plone theming, it's all documented and you can go that way. The other side: Barceloneta itself provides a backend.xml. You can use this backend.xml like this: somewhere in your rules file you have the inverted selector of the viewpermission-view body class, so whenever these classes are not on the body tag, the Barceloneta backend.xml gets loaded. This basically removes some stuff from your theme, so you only have the toolbar and the main content area, and the control panels and everything look like they do in standard Plone, so you don't have to care about that.

Some best practices. You map your content into the theme with Diazo, but Diazo is not for everything. If you need a special marker and the back end is not producing that marker, don't try to solve this with Diazo or XSLT. You can do that, and it can be handy sometimes, but it should not be the main way to go: it makes your theme way more complex, as you will see a few slides further on, it's really hard to understand, and it sometimes takes a lot of time to actually find things in the system. The solution is to fix the back-end templates directly: just override the templates, either with jbot, or, if you know other ways, with ZCML or whatever; there are many ways to do that, but the simplest is jbot. Here you have an example: if you have an add-on like plone.app.layout with a viewlet module containing a logo.pt you want to override, put the full dotted name path into your overrides folder, like you see on the right side, and this file will override the original; you can just copy the original file and make the changes you want. This is actually how we started with all the template changes in plonetheme.barceloneta. You can also use theme fragments; collective.themefragments is really handy. And you can add browser views: just plonecli add view, and you have your browser view, with a template if you want. So for new templates this is the way to go: either a browser view, or, for quick and dirty work, the theme fragments.

Let's have a look at how the theme fragments work. This is our theme folder, and inside it we have a fragments folder. If you put a template there, it's just a bunch of HTML, but you can also use everything you know from Zope page templates; the syntax is the same. So you can have quick and dirty templates: sometimes you just want some icons, some HTML stuff, and if you do it for just one site, and it will never change (some elements, especially on the start page, usually do not change, or not that often), you can just write your static HTML there, inject it into your theme, and leave it at that. But you can also do something more dynamic: here you see we are actually filling in some variables, and I can show you how that is done. We also have helper classes here; this is a Python script, so you can have some methods that help keep your page templates clean.
These three files, the .pt, the .py and the .xml, all have the same name, so they belong together. If you wonder what the .xml might do, let me show that. This might look familiar: it is a supermodel, the same thing you get when you create a new content type in the Dexterity control panel, and you can also use a code editor extension to create these snippets. What we are doing here is using a choice field with a relation widget, basically the same thing you see in the related-items widget; we use it to select an image from the website. Then we have some other fields: the scales you can select for the image, the height for the cover, some text fields for title and subtitle, a call-to-action link text, and a link target, which is again a related-items widget that lets the user point to some content in the site.

In the website itself, this is the template we are using; it is just a theme from Start Bootstrap. We use it a bit differently, because I don't like these one-page scrollers. Where is the point of having a one-page website when you have a CMS, just to edit one page? It might be the solution for some people and some websites, but I think Plone is probably not the right choice then. What we have here is a bit of a mix: we have similar elements, this one is actually just static and this one is dynamic. On the front page we are using Mosaic (the margins look a bit too big here, that's the newer version and it needs a bit of fine-tuning), and we inserted a theme fragment: you get a list of fragments and you can place one like every other Mosaic tile. All the fragments you have in the theme are automatically visible to Mosaic. Here you see the cover fragment: we already selected a background image, and here we have all our fields. And this one is just a static fragment, so there is nothing to edit; I can insert some other fragments I already added, but other than that it stays the same. So the fragment template uses the Python functions and the data injected with the supermodel, and you can create tiles or fragments that you can configure. There are limitations: you cannot have file upload right now, I think, and TinyMCE doesn't work, so it's more the simple fields that are working. If you need more, you'd better create a new Mosaic tile as a package and use that.

Okay, let's go on. One thing I like (not everybody does; it's about half and half among people who handle design) is using Sass mixins. Let me show you what I mean. If you have markup like the first example, it has a main wrapper, which gives you a hint what it is and has a name, so I can address it and put styles on it; it also has a content column; and then it has extra stylesheet classes which you might know are coming from Bootstrap. The second example is way cleaner. The problem with the first one is that it can get really messy: these are just a few settings, and you can easily have a bunch more Bootstrap configuration classes in there. Often you see a lot of markup that has only those classes and nothing meaningful like a main wrapper or a content column, so in the end you don't even have a chance to easily style it from the outside. The version with only semantic class names is clean.
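Here is a rough sketch of the SCSS side of that, using the standard Bootstrap grid mixins. The class names and column sizes are just the ones from the example markup, and the import paths assume Bootstrap is installed via npm.

```scss
// Pull in Bootstrap's functions, variables and mixins (e.g. from node_modules)
@import "bootstrap/scss/functions";
@import "bootstrap/scss/variables";
@import "bootstrap/scss/mixins";

.main-wrapper {
  @include make-container();
}

.content-column {
  @include make-col-ready();          // column initialization

  @include media-breakpoint-up(sm) {
    @include make-col(4);             // roughly the equivalent of col-sm-4
  }
  @include media-breakpoint-up(lg) {
    @include make-col(8);             // roughly the equivalent of col-lg-8
  }
}
```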
With Sass, or SCSS, which is basically the same, you can then define your columns, containers and whatever settings you have in the stylesheet. This is an example of how to use mixins: pretty much everything Bootstrap provides you can also use as mixins. The first thing they do is the initialization of the column, and then you can have different settings for different breakpoints; this is basically the equivalent of col-sm-4, and for the larger screen col-lg-8. The big advantage is that this lives in the stylesheet, so I can change it without having to override templates. If you want to go more into detail on how Diazo works, the rules, the mappings and all that, have a look at the Diazo documentation; the Plone docs on docs.plone.org also have a bunch of information.

Let's have a look at the new part, the upcoming stuff. As my colleagues already mentioned, we now have standard Bootstrap markup coming from the back end, which makes integration much easier. If you start with Bootstrap from scratch now, integrating a Diazo theme is pretty simple; the biggest headaches I actually get are in matching the ideas of a designer who wasn't thinking about what exists in Plone and what is doable, and mapping that onto the CMS itself. But that you will always have. You also have the custom CSS field in the theming control panel. If you want to try it out, it actually already sneaked into 5.2.2: from 5.2.2 you can find it in the Advanced tab of the theming control panel, and it is basically the last bit of CSS loaded after Diazo and after every bundle. Whatever CSS you write there works like the old custom CSS: you can just override styles, and as soon as we have all the options for redefining CSS variables you can do that there too. It's a quick and dirty way, but it's also an easy way. Sometimes you just want to stick with the normal Plone design and only change some colors and a few things, and for that it's an easy way; you don't need to create a theme at all.

We also have simplified Diazo rules. If you look at this, it doesn't even fit on the slide; this is, or was, the standard until now in Plone 5 with Barceloneta. That might not seem like a problem if you think "I will create my own theme and do it differently", but it will still bite you, because this is also used for the back-end part. If you don't want to recreate everything yourself, you have to deal with it, because it is basically placing the whole content area with all the columns and everything, and it's really hard to customize or to grab smaller pieces, which is what you normally want. What we did instead is refactor it. The main reason for all of this complexity is to set Bootstrap-related classes; you see, we are making quite a fuss here just to set these classes. If it were up to me, I would probably go completely with mixins, but for now we are keeping the existing approach, and I moved all that stuff out so that you have it as an xi:include. You don't see it, and you can now just import your content into the theme the normal way, like I did it in Plone 4 or similar. This is working again, and you can also decide not to grab the whole thing in one piece but to grab the inner pieces in separate steps, so that you have more control. That's up to you.
So let's have a quick look at plonecli. We basically use plonecli to create a Plone package, a Plone add-on, and after creating the add-on we go into the created package and add a Plone theme. The theme templates are not updated yet; after I did that I had to change the structure, so they are not ready for Plone 6 right now, but they will evolve in time. You just answer some questions; most of them have sane defaults, and you can even override the defaults. If you have never tried plonecli, try it, it helps a lot. We also have the plonetheme.barceloneta-based template, which will also be updated in time, and this is basically what Peter was showing: the basic structure you get is pretty simple and pretty clean. When you start Plone now, you just see an unstyled theme which pulls in all the content so that you can inspect it, and then you put your own content in. You can copy it, or better, if possible, like with this one, you can just bring it in with npm or yarn and import the styles or the Sass files from the node_modules folder, which makes it easier. This is how it looks when it's done, and I will publish the theme when it's ready, so you have an example to inspect, as soon as possible. Right now this is still empty, but it will come as soon as it makes sense.

Yeah, thank you. I'm open for questions. Great, thanks Maik. I have two questions here; if you want your question to go in, add it to Slido, otherwise I'm going to ask these two. All right, so I'll go ahead with the questions I have. I noticed in one of your examples you were using Mosaic. One of the challenges I've had with Mosaic in the past is that it didn't have an undo or a history; is that still the case? Yes. Mosaic has not been completely refactored. Robert is working on that together with Peter for some projects, but there's not much other work going on; they were mainly working on the grid system and on making it work with the newer Bootstrap versions. You also get some new extra formatting options where you can say something like: on the smaller screen I want one column, on the bigger screen I want two. This kind of responsive setting you can now do with some formats. That you will have, but other than that it's still what you know.

Okay. I'm just confirming: plonecli now requires Python 3? plonecli itself, yes, so you have to use Python 3, or pip3 install plonecli, to install it. But plonecli is only a wrapper around bobtemplates.plone, and bobtemplates.plone in the current version 5.2 still supports Python 2.7 and is mostly usable down to Plone 4. The only things that are not really usable are the theming templates for Plone 4; everything else should work, at least they are tested on Travis. The theming templates don't make much sense for old versions because the main benefit they give you relies on Barceloneta and the backend XML and so on. You could still use them if you want to create a Diazo theme; it's not impossible. Other than that, we will probably give up the backward compatibility for bobtemplates.plone soon: make a release branch and a new version, and that version will then only support Plone 6 and Python 3, to get rid of a lot of code.
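Condensed into commands, the plonecli workflow described in the talk and in this answer looks roughly like this. The package name is just an example, and the exact sub-template names can differ between plonecli and bobtemplates.plone versions.

```shell
pip3 install plonecli                  # plonecli itself needs Python 3
plonecli create addon collective.mytheme
cd collective.mytheme
plonecli add theme_barceloneta         # or: plonecli add theme
plonecli add view                      # add a browser view later if needed
```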
We would then also drop things like the old theming templates, because they don't make sense anymore once we branch off, but you can still use the current version, which has received some updates, instead. Okay, I think I'm not seeing any other questions, so once again, thank you very much, Maik, for your presentation. Thank you. It gives us some context on how to move forward with theming. This has been a really great track one on theming and user experience for Plone. Yes, and please, everybody, jump on board and help. We are just a few people doing a lot of the work, so every bit of help, even the smallest one, is really appreciated. Okay.
Plone Theming with Diazo is easier than ever! Integrating an existing static Bootstrap Theme into Plone. The talk will guide you through the process of creating a Plone theme addon with the “plonecli” and the integration of an existing Bootstrap theme via Diazo rules. For the backend we will rely on the Diazo backend rules of the Plone default theme. So we only need to integrate the visible parts of the Theme. With the new Bootstrap compatible backend markup, we don’t have to override templates to achieve the final design.
10.5446/54776 (DOI)
Take over. Thank you, Eric. All right. In this talk we're going to describe two different Jazkarta projects for which we chose Pyramid as the platform. Both projects had a requirement to handle complex permissions, but the details of the projects were rather different, and that led us to somewhat different solutions. I'm going to describe the first project before we dive into the technical stuff.

The first project, the Better Evidence program, was a project for Ariadne Labs, which is a joint center for health systems innovation of Brigham and Women's Hospital, a big hospital in Boston, and the Harvard T.H. Chan School of Public Health. The goal of Ariadne Labs is to design and scale health system solutions that produce better care. It was founded by a fairly famous surgeon, Harvard professor and best-selling author, Atul Gawande; you see a picture of him there in the middle of the homepage. You might have heard that name because he writes for The New Yorker magazine, and he was also recently named as a member of Joe Biden's COVID-19 transition advisory board, so we're very excited that science is coming back into prominence at the US federal level.

The Better Evidence program is one of Ariadne's programs, and it provides free subscriptions to digital tools to practitioners in developing countries. The UpToDate decision support resource is an example of such a digital tool, and it is the subscription that is currently offered, although additional ones are planned for the future. UpToDate is typically a fairly expensive subscription: it costs 520 US dollars per year, which is many times what people spend on health care in the settings where these practitioners work, and that helps explain the willingness of the practitioners to go through a somewhat lengthy application process.

The Better Evidence website that we developed allows practitioners to apply for a donated subscription, and it also allows the administrators to review the applications. It's a fairly long, complex five-step application process that saves intermediate results at each step: first you create an account, then fill out profile information, then describe the medical site the applicant works at, then fill out a 30-plus-question application form, and finally you get to write a short essay. So, as I said, it's a somewhat lengthy process. This is a screenshot of the giant application form, just to emphasize that it is rather long; we put a lot of effort into both presentation and widget choices on this form to make it as easy as possible for people to fill out. The applicant users have a dashboard with dismissible notifications that shows any previous subscriptions and any previous application attempts.

Then there is a review system for admins. Here you see the screen for the application review worklist: they can quickly scan new applications and accept or reject them. The keywords in the applicant statements, the short essays that go with the applications, are highlighted in yellow. Admins can select from a menu of canned notification emails to send to the applicant, for example explaining how or why they should try again. Admins can also view all the application details by clicking on the application.
That includes a screen showing the submission and approval history for the application, and admins can leave internal notes on applications for their colleagues, or to remind themselves what's going on with an application. Medical sites have their own review screen. They're managed as separate objects, since multiple applicants can of course work at the same medical site, and new medical sites go through a review process as well, to make sure they meet the Better Evidence criteria; for example, for-profit sites are not allowed in the program. Sometimes applicants inadvertently create duplicate medical sites by using an incorrect name or spelling, or by switching the order of words, and when admins discover duplicates they can be merged with the original or correct medical site; the yellow icon on the right is the merge button. One more feature of the site: admins can also set different landing page messages to display to logged-in versus anonymous users. Okay, now I'm going to turn it over to Jesse for a technical explanation of the site.

Hey everybody. First I'll go into a brief discussion of why we chose Pyramid over other options. From the initial description of the project, we thought we were going to be building a very straightforward CRUD application. Applications, we thought, were likely to be in a standard format, and there would be just two categories of users: applicants, who would submit and maybe edit applications, and a fixed set of site administrators, who would review the applications and apply some standard approval process. And since there's no reason to build an admin UI if you don't need to, the thinking was: we'll write a couple of models and views, wire up the standard Django admin, and we'll be 90% of the way there.

Of course, more conversations with the client followed, and the picture changed. We had new requirements, in addition to the core Better Evidence administrative and application requirements. There would soon be multiple products, with different applicants, applications, and, more importantly, admin teams. This multi-product-team model included a bunch of related feature requirements. While some application questions would almost certainly be common to all products, many would not. Admins for product A need to review applications for their own product, but not for other teams' products. Workflow steps would likely vary between teams; think simple Plone workflow versus some custom Byzantine workflow, since different teams might elect to do different things there. We needed to be ready to support permissions that varied by workflow state. There was also talk of supporting institutions, generally large hospitals or medical schools, and of a referral system. So it was becoming clear that this was no longer looking like a simple system.

Specifically, we realized that Django really lacks the support for context-based permissions that we needed here. It works fine when a person can take an action in every context, but when you need to dictate that one person can take some action in one context but not in another, the support in Django is lacking. And as Plone people, we really take all of this stuff for granted.
And it's easy to forget that this is pretty fancy stuff, really. So we built it on Pyramid. An interesting aspect of the project was that a lot of the difficult requirements were not immediate: they were coming soon, but not right now. We wanted to be able to start simple and then add complexity when the requirements actually emerged, especially around the permission requirements; there's little worse for admins or users than a bunch of complex permission machinery you're not actually using. We also wanted an established, opinionated pattern for wiring up permission restrictions that didn't ooze into everything, and Pyramid was really a great fit in this regard.

So I'll go over some of the pieces and how they fit together. Our role models are all just ordinary SQLAlchemy models, but they give us a really flexible way to categorize users into groups, which we can then associate with permissions. In our project, admins could be either global, for Better Evidence staff, or team/publisher specific. The role models have a property which constructs principal identifiers based on both a publisher id and a level (admin, for example), and one or more of these roles can then get assigned to a user.

A quick reminder: a principal is just an identity for a user, and there is often more than one; a user might have, for example, an admin principal as well as a jesse@email.com principal. Principals are also the way you model arbitrary groups: you'd assign a bunch of users the "plonista" principal or whatever, and they could be treated as a group. That's basically how we're using them, creating small, publisher-specific groups. The Pyramid authentication policy then provides a standard way of doing this: we take the user's associated role records, which we saw on the last slide, and aggregate the principals they produce. The resulting set of principals is what gets associated with the user when they log in or submit a request.

The authorization half of this happens through ACLs, access control lists. Pyramid lets you set up context-dependent permissions by defining these ACLs. Critically for our project, we can programmatically determine the name of the principal to which we grant the manage-users permission, but only in the context of a specific publisher. So if we're in a part of the site under publisher one, users with the principal for publisher one admins will get the manage-users permission, but publisher two admins won't. The final bit of glue in Pyramid is simple view decorators to declare permission restrictions: in the view, we just check for the manage-users permission, and that's all we care about there. All the detail about how the user was granted that permission is out of our hair, and as we add new teams, and possibly new ways of assigning principals and mapping them to permissions, this view will be completely unaffected. It may change for other reasons, of course, but you really don't need to think about the permission system very much once it's set up.
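As a stripped-down sketch of that pattern: the context class, the principal naming scheme and the view details here are simplified stand-ins for illustration, not the actual Better Evidence code.

```python
from pyramid.security import Allow
from pyramid.view import view_config


class PublisherContext:
    """Resource object for everything under one publisher."""

    def __init__(self, publisher_id):
        self.publisher_id = publisher_id

    @property
    def __acl__(self):
        # Grant 'manage_users' only to admins of *this* publisher,
        # plus a global staff group.
        return [
            (Allow, f"publisher:{self.publisher_id}:admin", "manage_users"),
            (Allow, "group:global_admins", "manage_users"),
        ]


@view_config(route_name="manage_users", permission="manage_users",
             renderer="json")
def manage_users_view(context, request):
    # If we get here, the current user's principals satisfied the ACL of this
    # particular publisher context; the view doesn't care how that happened.
    return {"publisher": context.publisher_id}
```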
You just go about your business with your other changes, and that's pretty great. So now we're going to go back over to Sally, who's going to introduce our second case study, the Washington Trails Association.

Great, thanks Jesse. WTA, the Washington Trails Association, provides extensive information about hikes in Washington State, in the northwest corner of the US. Jesse and I are going to give a talk tomorrow describing WTA and their thirteen-year-old Plone site, but this talk is about their volunteer management system. As part of their mission to protect Washington's trails and wild lands, they organize trail maintenance work parties. In 1999 a volunteer created a system to manage work party signups; he wrote it in Perl, and it lasted until 2013, but by then they had pretty much outgrown it and needed a new volunteer management system. They hired Jazkarta to create the new system, and I'll give you a quick tour of it.

WTA staff enter detailed information about the work parties in Salesforce, which is WTA's constituent relationship management system, and that information then gets displayed on the Pyramid site through magic which we're going to describe a little later. Volunteers can search for work parties and filter them by criteria such as when they're happening, the type of work party, the region it is in, and features like bridge building or being able to bring your family. They can display the results of those filter queries in a list like the one shown here, or display the results on a map, and handily they can also display the results on a calendar. They can click on a work party title to see detailed information, with a button to sign up for it. The information includes the location of the work party on a map and also detailed driving directions, which is pretty handy. It includes which documents are required, for example waivers that might need to be signed, and you can fill out and sign those documents through the volunteer management system. Finally, the Crew Corner tab provides a message board where the leaders and members of a work party can communicate among themselves and arrange carpooling, that sort of thing. Volunteers can register for a work party as a guest, as an anonymous user on the site, but they can also log into their My Backpack account to register as a member of the site, My Backpack being essentially their member account, their member profile. We'll talk more about that tomorrow; that is a Plone thing. Now I'm going to turn it over to Alec for a technical explanation of the site.

So I'm going to answer the question: why Pyramid? Our decision-making process here was a little different than for Better Evidence. Our first thought was that maybe we'd use Plone. The reason is that WTA's primary website was already a Plone site, so it would provide a seamless user experience; they have a ton of existing user accounts; Plone has really great role-based access control; we already had Salesforce synchronization on their site for all sorts of existing content; and Jazkarta had already developed a similar kind of event management and booking system using collective.workspace for another client, which we thought we might be able to reuse. But when we looked a little deeper, we thought: maybe not Plone.
Doing a tightly coupled two-way sync with Salesforce is tricky and can be prone to errors, and for this application we would really need the data to be updated essentially right away from Salesforce. That made the synchronization we had in place, which involves a lot of asynchronous operations, not so useful. It was also very likely that we would see some performance issues, partly because of the Salesforce syncing, but also just because of general Plone overhead. And additionally, perhaps the biggest concern from the client: they didn't want to get further locked into a CMS platform with a limited developer base.

So our second thought was: maybe we build the whole thing in Salesforce. We could create a dynamic React front end on top of data that really belongs in the CRM to begin with; they're already using Salesforce for all sorts of stuff, there are a lot of Salesforce developers out there, and we were going to be working with Salesforce developers in any case. However, when we looked deeper, we thought: maybe not Salesforce. The issues were that it would make it almost impossible to do SSO with the Plone site, which is where all of the user accounts live; there might be some ways to do it, but they would probably be pretty expensive. Making direct API requests to Salesforce can cause performance problems and single points of failure, and can get very expensive, especially when running on a nonprofit account where you have monthly API request limits. There are also no good options for caching if you're doing direct client-side requests to Salesforce, so it was likely to have poor performance and reliability.

So we went with a different solution: Pyramid, which is very Pythonic. Good Python developers are relatively easy to find, and even if you can't find Pyramid developers, most Python developers can get up to speed with a Pyramid app pretty quickly, which eases maintainability. And React as a front end, which is also widely used and widely known, so developers are easy to find.

The architecture we chose looks like this. The flexible authorization system in Pyramid lets us do SSO with the Plone site using shared auth cookies. Registration data is cached in Redis on login and used to determine roles and permissions per user and per work party context. The work party data from Salesforce gets synced into Elasticsearch for high-performance queries. The resulting system is loosely coupled to both Plone and Salesforce, except when you're logging in or modifying work party registrations, and it ended up being extremely fast, because all the data for work party searches and the primary work party views comes straight out of Elasticsearch.

Now let's look at some of the code, first the authorization part. We've got what Pyramid calls a tween factory here that manages the SSO with the Plone site. If somebody visits the VMS site, which in a lot of ways looks like it's part of the Plone site, and clicks the login button, they go over to the Plone login form, and once they log in, they get redirected back to Pyramid with a JSON web token in the query string that carries the user's email address.
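The shape of such a tween, in a heavily simplified sketch: the settings keys, the session handling and the load_contact_from_salesforce helper are placeholders for illustration, not the real WTA code, and jwt here is the PyJWT package.

```python
import jwt  # PyJWT, assumed here for decoding the token coming back from Plone


def sso_tween_factory(handler, registry):
    secret = registry.settings["sso.shared_secret"]

    def sso_tween(request):
        token = request.params.get("token")
        if token and "contact" not in request.session:
            try:
                claims = jwt.decode(token, secret, algorithms=["HS256"])
            except jwt.InvalidTokenError:
                claims = None
            if claims:
                # Hypothetical helper: look up the Salesforce contact plus
                # registrations, cache them in Redis, keep the basics in session.
                request.session["contact"] = load_contact_from_salesforce(
                    claims["email"]
                )
        return handler(request)

    return sso_tween
```

A factory like this would get registered with config.add_tween() during application setup.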
That JSON web token gets decoded by the tween, which makes a call out to Salesforce with the user id, and that returns the contact details. From that we get everything we need to know about the user in Salesforce: the user's Salesforce contact record, as well as any work party registrations that they or any of their dependents have. All of that contact data is stored in the session, and all the registration data is cached in Redis for making permission lookups.

This next bit is probably going to look very familiar from Jesse's portion a minute ago. We've got a group finder callback here: users are pulled out of the session and added to a variety of group principals. Some are user specific, some are based on a global Salesforce role, and then there are a number that are specific to any work party registrations they already have; they can be registered to specific work parties with a volunteer role, an assistant crew leader role or a crew leader role. The same goes for their dependents: users get groups representing all of their dependents, so they have permission to control work party registrations for the people they are guardians for. Additionally, though it's not shown here in the code, there's a concept of land managers, people who have some control over a specific area where a work party is happening. These are usually people who work for some sort of governmental entity, and they have certain rights to control or view information about work parties in their region.

Here's the wiring; it's just a bit of Pyramid boilerplate and quite simple. The group finder callback is added to the authentication policy, the tween factory is registered, cookie-based sessions are set up, and that's all there is to it. Once we've got all that information, we can start defining ACLs. We have a work party object, which is the context for most operations in the system, and it has an ACL. Within that ACL we assign permissions like view, roster, participate, assist, report, search and lead, and those permissions are assigned based on the principals, which can be work party specific; there are land-manager-specific ones as well. These permissions then end up being wired to views using decorators, just like Jesse showed earlier.

So in summary, we had two projects which were quite different in terms of user interactions, data models and back-end storage. In the first one, the requirement for the sort of flexible permission structures we're familiar with from Zope and Plone led us to choose Pyramid as the framework. In the second, performance concerns moved us away from Plone, and the need for flexible permissions and authorization again led us to Pyramid. In both cases, the ability to use relatively simple Python logic to map model attributes to roles, and to declaratively configure access control, was critical. We think the combination of performance and flexibility makes Pyramid stand out from other frameworks and makes it ideal for medium to large web applications with complex and evolving requirements. Thank you.

Awesome. Thanks, Alec and Jesse, and thank you all for watching the presentation. We've got some recommended resources here; I'm just going to leave this up for a few minutes while Eric turns to any questions we can answer. And I encourage everybody to join us in the face-to-face session, because it's really great to see your faces. Exactly.
Click the blue button below the video on LoudSwarm, or just use the link I'm going to post there right now; there it is. So I would love to see you all in the face to face. Thank you, Sally. Thank you, Jesse. Thank you, Alec. That was a really good talk. We do not have questions on Slido, so just jump into the face to face. Thanks so much. Thank you. Thank you. Thank you.
Ariadne Labs is a center for health systems innovation whose goal is to drive scalable solutions for better care. One of the projects at Ariadne Labs is the Better Evidence program, which provides free subscriptions to digital tools (such as the UpToDate decision support resource) to practitioners in developing countries. We assisted Ariadne in creating an improved Better Evidence website, making it easier for practitioners to apply and for administrators to review their applications. Although the application is long and involved and there are some interesting twists to the review process, this is basically a CRUD application that would have been suitable for a platform like Django except for one thing: the requirement for a placeful, role-based access control system. Because of this, we decided to use Pyramid. We will describe our solution, and contrast it with another Pyramid site with complex permission requirements, the Washington Trails Association's Volunteer Management System.
10.5446/54778 (DOI)
Hello and welcome to track two of the Plone Conference 2020. We will have the talk about Pyruvate, a reasonably fast, non-blocking, multi-threaded WSGI server, from Thomas Schorr. And with that I'll hand over to our speaker.

Hello everybody, and thank you very much for having me here today. My name is Thomas Schorr. I'm a freelance software developer from Freiburg in Germany, and I've been working with Python for over 15 years. I'm also a contributor to both Plone and Zope. About 10 months ago I started a Rust project that I want to present in my talk today. The name of the project is Pyruvate, and it's a WSGI server that I started to build in Rust.

Let's start with a quick glance at Python Enhancement Proposal 333, which specifies the Python Web Server Gateway Interface, aka WSGI. The basic message of the PEP is that you can define a Python callable with a specific signature that will be called by a server, the WSGI server, once for each HTTP request the server receives from a client or upstream web server. This Python callable, the WSGI application, returns a set of headers along with an iterable response that will be rendered into an HTTP response by the WSGI server. What we can see on this slide is the simplest possible WSGI application, and of course it returns a hello world message in plain text. Instead of a list, it could also return some other iterable.

Now, when we look at the server side, the server invokes the application callable once for each HTTP request it receives; I've already said that. But there are actually many possibilities for handling those requests. The server could be implemented as a single-threaded server; it could spawn a thread for each incoming request; it could use one-to-one threading or one-to-n, aka green threading; it could maintain a pool of worker threads; it could do Python multiprocessing using the multiprocessing module from the standard library; or it could do other things. The WSGI server can give hints about how it is actually handling requests through the environment dictionary; there are some keys related to request handling, but whether the application makes use of those hints is totally up to the application. In Zope, for example, I could not find any hint that it makes use of them.

On the application side, on the other hand, we normally don't have that simple hello world application. We often need to connect to components that outlive a single request, like databases or caches, and those database or cache connections might not be thread safe, while setting them up might be expensive. All of the above is true for Zope, because ZODB connections are not thread safe and they are also quite expensive to set up. So there is a recipe for disaster when choosing a WSGI server that uses an inappropriate worker model, one that does not fit your application and your application's connections. As a consequence, although there are quite a lot of WSGI servers around, we have a fairly limited choice of WSGI servers that are actually suitable for Zope and Plone. There's Waitress, which is the default; it has very good overall performance and is a pure Python implementation. Then there's bjoern, which some of you might have heard of; it's a C implementation, quite fast, using non-blocking I/O, and single threaded.
Then there are maybe some newer WSGI options that could work with Plone and Zope, and yes, there are some other options described in the Zope documentation, but those don't have very good performance. So eventually I wanted more options. More options, please, for WSGI servers that are suitable for use with Zope and Plone. So I put up my personal wish list. I wanted a multi-threaded server doing one-to-one threading with a worker pool, because that is the threading model I wanted to use. I wanted a PasteDeploy entry point, because I like that about Waitress; it's very cool, because it makes the WSGI server a pluggable component that you can exchange easily. I wanted it, of course, to handle the Zope/Plone use case; I have other use cases for WSGI servers as well, but I wanted it to handle Zope and Plone. I wanted non-blocking I/O, and I wanted a file wrapper supporting sendfile, which are two features that Waitress doesn't support. And of course I wanted it to perform well. This is not about creating the fastest WSGI server around, but I wanted competitive performance so that it's actually usable. Non-goals of the project are Python 2, because Python 2 is out of support, and, for now at least, ASGI; in a couple of months that might become an interesting feature. Windows is also a non-goal.

I decided to use Rust for the implementation, because I had been interested in trying Rust for a while, and I had some naive expectations: I expected it to be faster than Python in runtime performance, and I expected it to be easier to use than C. Let me elaborate a bit on Rust's performance. At the end of 2018 I saw a talk by Paul Emmerich from the Technical University of Munich, comparing network device drivers for a specific network card implemented in different high-level programming languages. His argument was that high-level languages are safer, but he also wanted to assess the performance penalties that come with runtime safety checks and garbage collection. Computer science students could apply to implement a user space network card driver in a specific high-level language as their thesis, and in the project they benchmarked each driver. The chart you can see on this slide is one such benchmark result, and what you can see is that there is, of course, the very fast native C implementation, and then next it's Rust that follows. Here we see batch size against packet rate, but there are a couple of other benchmark results and they all look quite the same: Rust is always very near the top when it comes to performance.
It's a systems programming language. So it allows you to do a low level, low level tasks. And you can control where you remember what kind of memory you are using for your data. And how is that relevant for safety and performance? And as an example, I want to quickly look at building a Rust extension for Python. So it's about interfacing with Python. When we look at Python memory management, there's reference counting and garbage collection. So whenever you assign an object, then you increase its ref count. And internally in the C API, there's a macro called PyIncrev that is invoked for increasing the object's memory reference count. And these PyIncrev invocations have to match with PyIncrev invocations. So decrementing the reference count. And garbage collection occurs when an object's ref count goes to zero. And if PyIncrev and PyDecrev invocations do not match, you'll see memory leaks or eventually, co-ordems. Now, as an example, if you look at well-known Python C extensions that we all use, then we have 63 occurrences of PyIncrev in B trees, which is part of the CDP. And we have 19 invocations in SOB interface. And if we compare that to the Rust C Python create, Rust create, that is implementing the Rust C Python interface, then we only have one PyIncrev invocation. And the corresponding PyDecrev is implemented in the drop trade for the Python object wrapper. And as a result, it is very hard to create a mismatch of PyIncrev and PyDecrev invocations when doing Rust extensions. So you're very well set up to create memory safe extensions. And it's getting harder to create memory leaks or co-ordems. Of course, it's still possible to create more references that you actually need. And of course, you can still create co-ordems by, for example, not fetching Python errors from the error stack. Well, other Rust features that are cool include strict typing, meaning you will find many problems at compile time. And pattern matching is really cool. Rust documentation is very good. And there are very helpful compiler messages. And yeah, there's a couple of other good stuff about Rust. So let's start having a closer look at PyRubate from a user perspective. So from a user perspective, at first, it's a package that is available from PyPi from Python packaging index. And so you could do pip install pyruvate. And then it's an importable Python module. So writing your whiskey application, you would do import pyruvate, then you define your application callable. And then there's basically one single function that you can use from this module. It's called surf. And you pass in your application, you pass in a socket to use, and you have to mandatoryly pass in the number of workers that you want. So now, since this is soap and blown, most of you will want to use blown recipe soap to instance with set C build out. And so in your buildout.cfg, if you want to use pyruvate instead of the default waitress, you add it to your X sections, and then you basically use a specific whiskey in a template and that you can pass to blown recipe soap, for instance. And essentially, in this template, you have a server section, so server colon main, and you specify the pyruvate paste deploy entry point. You specify the socket and the number of workers you want to use. And that's it. Let's have a look a bit at the project structure. Pyruvate is hosted on GitLab. I initially created it with cargo new dash dash slips, so cargo new being the equivalent of cookie cutter or op templates. 
Let's have a look at the project structure. Pyruvate is hosted on GitLab. I initially created it with cargo new --lib, cargo new being the equivalent of cookiecutter or bobtemplates, and with --lib you create a Rust library, so you end up with a shared object file on Linux. There's a src folder containing the Rust sources, and there's the Cargo.toml created by cargo new, which pulls in all the necessary Rust dependencies. There's a setup.py, which you have to add yourself when writing a Rust extension, and you need to use a package called setuptools_rust. It's very easy to use: you get a class named RustExtension with which you define your Rust extension entry points and the essential compilation options, and the setup.py also defines the PasteDeploy entry point. Then there's a pyproject.toml to specify the build system requirements; you can look that up in PEP 518. There's a test folder, currently containing mostly Python tests; these are tox tests, and I'm using pytest and tox to run them, while the unit tests live in the Rust modules as module unit tests, which is the standard way of writing unit tests in Rust. Then there is an __init__.py in the pyruvate folder, which actually contains the function definition for the PasteDeploy entry point and imports the file wrapper that you need for the WSGI file wrapper feature.

I'm running a GitLab pipeline for the project with two stages: testing, and a build stage for building binary packages. Part of the testing is linting: rustfmt, which is opinionated code formatting for Rust and makes code formatting very simple, because you just run rustfmt and your code is formatted the way it should be, and Clippy, which is a Rust linter giving you hints on how to improve your code. Then I run the actual unit tests and create a coverage report using kcov, a coverage tool that works for compiled languages. I upload the coverage to codecov.io, because at the time I started the project codecov.io explicitly supported Rust coverage and coveralls.io didn't say so, at least at that time. Then the pipeline runs the Python integration tests with tox for all available Python versions, and finally, if all tests pass, it builds wheels, so binary packages. I'm currently building manylinux2010 wheels for Python 3.6 through 3.9. I switched from manylinux1 to manylinux2010 after the stable Rust I use to build the wheels stopped supporting the old ABI; there was an error when loading the Rust shared libraries since Rust 1.47, I think, and I did not want to go through compiling my own toolchain in a manylinux1 Docker container.
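Coming back to that setup.py for a second: with setuptools-rust it boils down to roughly the following. This is simplified, metadata and version pins are omitted, the extension target name is approximate, and the PasteDeploy entry point callable is a placeholder rather than necessarily the real name used in Pyruvate.

```python
from setuptools import setup
from setuptools_rust import Binding, RustExtension

setup(
    name="pyruvate",
    packages=["pyruvate"],
    rust_extensions=[
        # Compile the Rust library and expose it as a module inside the package
        RustExtension("pyruvate.pyruvate", binding=Binding.RustCPython),
    ],
    # Rust extensions are not zip safe
    zip_safe=False,
    entry_points={
        # Placeholder callable name for the PasteDeploy server runner
        "paste.server_runner": ["main = pyruvate:serve_paste"],
    },
)
```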
The important thing to note about manylinux2010 is that you need recent pip and setuptools versions to make the package work: pip greater than version 19. If you do a pip install pyruvate and pip prefers an sdist over the wheel, which you will notice when it tries to invoke the Rust compiler and there is no Rust on your platform, then you need to upgrade pip. The same goes for setuptools when you're using it with zc.buildout: you need setuptools greater than 42. What's still wanted is binary packages for macOS. I don't have a Mac myself, so if anybody is interested in sending me a pull request: there should not be much difference compared to Linux, maybe the sendfile call is different on the Mac.

Okay, let's do a quick run through the features. There is a rust-cpython based Python interface; rust-cpython is one of, I think, three currently available Rust/Python interfaces, and I found it very suitable for this project. I'm using a crate called mio, metal I/O, which is part of the bigger tokio project, and it provides the non-blocking I/O: Pyruvate gives you non-blocking read in all cases and then optionally blocking or non-blocking write, the default being non-blocking write. I'm using a worker pool based on the threadpool crate, which does one-to-one threading. As I said before, there is a PasteDeploy entry point. Pyruvate integrates with Python logging, and I'm doing asynchronous logging, which means that when Pyruvate creates a log message it doesn't need to hold the global interpreter lock. It also means that you can specify the logging configuration for Pyruvate in your wsgi.ini if you're using Plone and Zope, or in any other way described in the Python logging documentation. You can use TCP sockets, IPv4 or IPv6, or Unix domain sockets, and it also supports systemd socket activation. That last one is not easy to use out of the box with Zope and Plone, because you need the PID of the WSGI server to look up the sockets, and that's not quite easy to do with plone.recipe.zope2instance.

Then I started to look at performance, and it turned out to be a rabbit hole. Performance as in number of requests and amount of data transferred per unit of time: I wanted, of course, to test it and eventually improve it. My approach started with static code analysis and refactoring. That was very helpful, because Pyruvate really started as a "hello Rust" project, and I found out, for example, that memory allocations are pretty expensive, which everybody else might already know. Then I struggled a lot with how to actually induce socket blocking; on a normal Linux box it's not easy to get a socket into a blocking state, so I eventually resorted to limiting socket buffer sizes in a Vagrant box for testing purposes. It would be good to have that with Docker, but I couldn't find a way to do it yet, because socket buffer sizes are basically specified on the host, and it would be good to be able to manipulate them on the container. I've also been looking at flame graphs built from perf data; perf is a tool that collects performance data on Linux, looking at the time functions spend on the stack, and that proved to be a very good way of assessing performance issues. For example, I found out that a call to to_lowercase is much more expensive than a call to to_ascii_uppercase, and I could eventually switch that. And I started doing some load testing with siege and ab, Apache Bench, which is a very old performance testing tool. ab will only ever fetch one URL, whereas siege will download the whole page, so they are quite different approaches to testing.
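For reference, the generic recipes for these tools look something like the following. The sampling frequency, durations, URL and process lookup are example values rather than necessarily the exact invocations used for Pyruvate, and stackcollapse-perf.pl and flamegraph.pl come from Brendan Gregg's FlameGraph scripts.

```shell
# Flame graph from perf data: sample the running server, then render an SVG
perf record -F 99 -g -p "$(pgrep -f pyruvate)" -- sleep 30
perf script | stackcollapse-perf.pl | flamegraph.pl > pyruvate-flames.svg

# Load testing: ab hammers a single URL, siege fetches the page with its assets
ab -c 100 -t 30 http://localhost:7878/Plone/
siege -c 100 -t 30S http://localhost:7878/Plone/
```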
Now, a quick glance at some design considerations that affect performance. The first is the Python global interpreter lock: Python code can only run while holding the GIL, which is like a baton in a relay race. If you have multiple workers, they really need to acquire the global interpreter lock in turn, and the approach is of course to acquire it only for application execution and to drop it when doing I/O. There is actually more than one possible way to do this, so I'm not quite sure yet which is the best way.

Another important issue is I/O event polling. The mio crate I mentioned earlier presents an abstraction called a poll instance, which does the necessary system calls. Whenever the server accepts a connection, that connection gets registered for read events with such a poll instance in the main thread. We are doing non-blocking reads, so it can take multiple turns to read the entire request, but eventually we pass the completely read request, plus the connection, to the worker pool. A free worker picks up the connection along with the request, invokes the WSGI application and iterates over the WSGI response chunks, which needs the global interpreter lock. Now, if you're doing blocking writes, you can simply loop until the response is completely written, which is easy. In the non-blocking case, however, you can only write until the operating system raises an EWOULDBLOCK or EAGAIN error. When that happens, I register the connection for write events with a per-worker poll instance. So there is actually not just one poll instance: there is one for read events in the main thread, and then each worker has its own poll instance for write events. They're not very big, so I'd say this isn't overly expensive, but there could also be other options for doing it. Once the worker receives EAGAIN, it drops the global interpreter lock and stashes the response for a later write, until it receives a writable event from the per-worker poll instance. So much for the design considerations.

Now, some first performance results, on my Lenovo X390 and in a Vagrant box; the Vagrant box has two CPUs, two gigs of RAM and 8K socket buffer size limits, which is really small. So I'm running it both natively on the laptop and in the Vagrant box. Pyruvate is a lot faster than Waitress on a hello world WSGI application, and it's still faster than Waitress when downloading the / URL, in this case following the test criteria for benchmarking WSGI servers from the Zope documentation, but it is currently slower on /Plone. That's interesting; I'm still investigating, I don't know why that is, and I also sometimes see contradicting results, so there is definitely more performance testing needed. There are a lot of different situations and still a lot of possible combinations of settings, like the number of threads and so on, and I'm only starting to assess that.

Okay, let's do a little live demo. I've prepared a buildout, which I've already run, defining a Plone instance using Pyruvate and another instance using Waitress. They run on different ports: the Pyruvate instance on 7878, the Waitress instance on 7879, and the ZEO server is already running. So I'm starting the Pyruvate instance, and it's telling me it's ready.
I've prepared a browser tab for this, and the database is empty, so I'm prompted to create a new Plone site. I'm creating a German site, because I'm German, I'm used to German. Looking at the console, it's running; I get Plone, I can browse a bit, I can upload a file. I'm uploading the Rust programming language documentation; it takes some time because it's rather big, but eventually it gets uploaded. I can download it, and I could open it here in a PDF reader. Let's maybe have a quick look at the ZMI: going to the control panel and then the databases control panel, I can see that there are two database connections at the moment, which corresponds to the two worker threads I defined in the buildout. When I reload, I see that these are always the same two connections, because the WSGI application gets copied to each worker thread, and the ZODB connection is opened once at startup and reused all the time. That's basically all there is to see; it's just Plone. This is Plone 5.2 latest, so it should be 5.2.3; I built it on Saturday.

What I can do now, after browsing a bit, is some load testing like specified in the Zope documentation, with 100 concurrent clients, except I'm not running it for 100 seconds, because that would take too long, but for 30 seconds. I'm going to do the same thing with Waitress afterwards, just to get a comparison, and while this is running we could maybe already take questions, if there are any. No, at the moment there are no questions. Okay. So I'm starting the Waitress instance, and you can see Waitress starting up. To be fair (sorry, that was the wrong page), I'm going to browse a bit on this instance as well, just to warm the caches, of course, and then I'll run the same test against the Waitress instance so we can compare performance, where of course I am biased. And as I said before, there are different results for different URLs, so you definitely have to do your own benchmarking if you want to pick a WSGI server. What we can see here is, hang on, where are we: 5809 completed requests in 30 seconds, and here we got 8171. This is Pyruvate, with a couple more completed requests. Neither server had failed requests, and corresponding with the bigger number of completed requests, more data has actually been transferred.

So much for the demo. That's basically what I wanted to tell you. I'm planning a 1.0 release for the end of this year, and there are a couple of things I still want to do. Currently we are at version 0.8.3 on PyPI; it's still a beta version, so it will have bugs, but I'd be happy if you want to give it a try. There's a feature branch on GitLab that needs some work, implementing keep-alive and chunked transfer, which I want to integrate in the 1.0 release. As I said before, macOS support is still open. I want to optimize the pipeline a bit; for example, I'm currently compiling the kcov binary every time I run the coverage reporting, because there is no kcov binary package available for Debian Buster, which is the base image for Rust nightly.
And then I want to fix an issue with the ThreadID, not that it's not reported correctly when doing async logging. And of course, I'm doing more testing and I hope that I can fix a couple more, more bugs. And that's it. Thank you very much for your intention. And yes, if you want to give it a try, I'd be happy for feedback, for your feedback. I'm looking forward to your feedback. Thank you very much. Thank you, Thomas. There was one question in the Slido. I guess it's kind of answered. The question was, is it a project to keep an eye on or is it ready for production yet? It's better. I'm using it in production on a couple of smaller sites. And you can definitely use it for development, as I say, drop-in replacement for waitress. And I think you also can use it for testing. And I think it's not a replacement for waitress. It's an alternative. So it's got a couple of features that are the same, which are the features that you need to make it suitable for use with soap and blown. And then you've got a couple of different features, like non-blocking I.O. Yeah. And that makes it a bit different from waitress. So you might want to try it in different situations, or you might just want to compare it. But I think you can start. Yeah, start. Yeah, I started using it in production. Okay, then thank you from Vincent for this answer and thank you, Thomas, for the talk. I would encourage everyone to come to the face-to-face. You can find the link down below under the stream just by click on join face-to-face. And yeah, we have a break on this track. I guess for nearly an hour. So see you back in an hour, hopefully. Thank you. Thank you.
Pyruvate is a non-blocking, multithreaded WSGI server with competitive performance, implemented in Rust. It features non-blocking read/write based on mio, a rust-cpython based Python interface and a worker pool based on threadpool. The sendfile system call is used for efficient file transfer. Pyruvate integrates with the Python logging API using asynchronous logging. PasteDeploy configuration and systemd socket activation are supported.Beta releases are available for CPython (from 3.6) and Linux. The talk will present the current state of the project and show how to use Pyruvate with Zope/Plone and other Python web frameworks. Another focus will be on the roadmap towards a 1.0 release scheduled for end of this year.
10.5446/54779 (DOI)
Well, hello everyone. My name is Carlos de la Guardia and I'm going to give a short talk about form library that I've been playing with. It's called Questions. And, okay. So here we go. Okay, why did I write a form library? Yeah, no excuse there. In my defense, many of us have written form libraries for the last year, so I took the liberty to get Eric Brehls to it here to talk a little bit about myself and all the people that have made form libraries. Maybe when we're old, we're going to talk about a lot of different form libraries that we created. Yeah, I just thought that I would write one. The idea is a simple library for displaying and handling forms. They are defined using Python code like many other form libraries like Deform or WTF forms. And the gimmick here or the different thing is that the forms are rendered using a library, a JavaScript library called Surbi.js. The idea is I always, when I've been working with even small pyramid projects or flash projects when you have forms and you're trying to use a form library like WTF forms or Deform, you end up having to write a lot of JavaScript code to connect things together and then some widgets that you want to use, they are not compatible with the library, so you have to create custom widgets or try to figure out how to make them run the forms. So I thought why not just do away with all the Python rendering and forget about the Python templates for the form elements and just skip the Markov generation and use JavaScript. So that's the big idea here. And that's what the question does. Questions gets on the back of Surbi.js, which is according to them is a modern way to add Surbi's and forms to your website. It's a very mature library, it's been several years in development and it's compatible with most JS frameworks with React, with Angular and Vue.js and JQuery. So it's pretty easy to use it and no matter what JavaScript framework you're using you can add functionality to it using your own framework if you need it. It has lots of features that were built initially for Surbi's, but it's now a full-fledged form library and it also has many things that are unique to Surbi's, like for example you can define correct answers for questions and then use that as a sort of test or quiz and all the functionalities included there. So it's pretty easy to define simple quiz applications using this library. The license is MIT, so it's open source free to use, free to distribute and that's a great thing. It also has a form creation tool, JavaScript thing, but that's not open source. It's free to use though, so I want to show it a little bit. Now, my library questions. The features it has, since we use Surbi.js we get a nice integrated user interface and the JavaScript widgets are really powerful, not just the ones that didn't close, but it's compatible with some other pretty popular add-ons like select 2 and some jQuery UI things or bootstrap things as well. So like I said before, it's compatible with Angular, jQuery, knockout.js, React and vue.js and questions make sure that you get the right files for each version, so you don't have to. If you want to get into JavaScript you can take control, but if you just want to display things from Python and you just want to use Python, questions can take care of everything. There are more than 20 question types from simple text inputs drop downs to elaborate widgets, for example panels, dynamic things and matrices that you can pretty easily create. 
It also has multiple look and fill options like themes where you can change their form appearance pretty easily. It includes bootstrap CSS support. It also has full client side validation and questions that's server side checking as well. So that's the bird's eye view of what it can do. How does it look? For example, to define a form, you just sorry, I clicked a little too early here. There it is. For example, the concept of panels is that you can define a form that has some controls, for example here I dropped down question and I'll text question. And then the real form you can use that other form as a panel using the form panel thing that questions provides. And you just pass it the other form that's going to be the panel and you give the title. And the dynamic thing means that the user cannot as many social media platforms as desired. So for each one he will get or she will get another opportunity to add a media with a text. So as you can see it's pretty easy to define a panel and use it like a field set in long terms. Like a field set and it's pretty easy to define it and just use it here. That's one of the concepts that we have and questions also make it very, very easy. Super easy to have multi-page forms that are completely wired. You get a next and previous button and you get a complete button at the end. You can go back and forth through the form as you want and you don't have to take care of controlling that on the server side, just do it here and when the form is through and the user completes you get all the information from all the different forms. To define a multi-page form you just create a couple of forms with some controls, for example page one and page two here. And then you add the different fields that you want to use. Take a look for example at the drop-down question that allows things like passing a URL for a RESTful service to get the values. So the library takes care of many of the things that we usually have to program in the Python side. And to create a multi-page form you just create a new form and use a thing that I call the form page where you get to associate the form, each form with a page, give it a title and then you get a multi-page form. Right now I'm going to go through the code examples quickly and then I'm going to show how it looks in practice. Here's a simple form, for example, that I have. You can see the different fields. One thing that is pretty easy when using this library is to define live form behavior when you need to show some question only if the answer to another question is a specific one or you need to filter choices in a select or in a checkbox group or radio with it. You can do all that pretty easily here. If you take a look at the bottom of this code slide, visible if right here, you say if the language question equals Python, that is the answer is Python, then this question will be shown and if not it won't be shown. Things like this are pretty useful to get very interactive forms that change according to the user responses and it's really easy to handle that which it's not usually that easy when you're dealing with defining the things in the Python side. So in this case, all the behavior of the form is defined in Python code and you get still a lot of control over the JavaScript side of the thing. 
To handle the form data once the user is complete, just simply define, for example, this is like a plus application, you get a view that's post view and to get the data you just get request to get JSON and you get something like profile data. So it's all JSON. You can take the data and it generates JSON. The whole form actually is rendered as a JSON thing which is pretty useful if you want to also export the code to use somewhere else. You can even generate a static site that has a quiz or form without any other Python code and just use it as a static file application. Here's an example of how you would get the form to display this data that is about the profile data that has name, email, per date country. You can display it in a form like an edit form using just passing form data and JSON data that you want. So it's pretty easy to handle all the form data. And to display the form there are a couple of choices. Like I said before, you can just generate a full HTML page and display that and that would be a standalone thing. Or you can integrate it with something like for example here we have a ginger to template and the form gives you several things to take care of stuff. If you want to take care of the JavaScript completely you can do that. But the form also allows you to include just the JavaScript that is required according to the widgets that you selected. So if you use this loop for form.js you will get just the JavaScript that the widgets that you are using on the form need. The same for the CSS. And the JavaScript for the form is inserted using the render.js called from the form. The only thing that you need is to define an ID with the name of the HTML ID that you pass into the fault is questions form and you can pass whatever you want. And that's all you need to get the page rendering. So you can combine this with your own resources and templates and just insert the form inside your application whatever you want. Let me show you a little bit. Hold on a little bit. Now I'm going to show you for example in this simple form. Here's a code for the form. As you can see it's similar to the code that I was showing before. We have the dynamic thing, the live control thing here. And it's a simple class application with a form that you can post that as well. And it's just a server. And run it. And here it is. As you can see it's rendered pretty quickly. If I choose Python as a language it automatically shows me another question where I can choose which is my favorite version. But if I don't like Python then it doesn't bother showing me that question. If I don't feel the value of a question I get immediately a validation error and I can't move to the next page until I manage that. In this case just complete the form. So that's one example. Let's take a look at another one. Here's another one. It uses the not dynamic panels. You can have as many of something as you want. So here it is. An example. And this is a multi page form. We define three different forms. They can also be used individually if I wanted to just use this form. For example page 2 as a standalone form. I can do that. Just render that one. Or I can integrate it into a complete multi page form using here. So it's a very simple syntax to get the multi form going. Of course in the time-springe. So you notice that this is a different style of rendering the form. I just changed the name of the theme and I get a completely different thing. And then I get a multi page form. 
I can go to previews and the next page is pretty easily go back and forth. Optionally if I select some fields as required I cannot go to the next page until I fill up the required fields. And I can pick my favorite sports. And this is the panel theme that I said. Let's say that I'm an adult Cowboys fan. So I say Cowboys, my team and the sports American football. And I want to add another one. And I can add as many as I want or remove them. And everything is ready to be used just by defining a dynamic panel and setting the options. So it's pretty easy to use. And it gets a lot of interactive widgets in your form pretty easily. You can also use the values of other questions. If I use my name here. And I can then have other questions that take the value of previous questions and use it in different ways. And if I say just a question then I get the other question to show up. If I say no, it doesn't show. And when I'm ready I complete the form and that's that. So as you can see it's pretty easy to get multi-page forms that do lots of things pretty easily. One more. Look at the definition so that you can see how little code is required to get that form that I show just now. Just that. That's a screen full of code. This is enough to get that form going. And I wanted to show also this is the Suruby JS page. And one of the things that they have is the Suruby Creator which is a full-fledged form editor that you can use. Here you can see a very complex form that has lots of pages. And you can set the options for all of the different fields. So you can see it allows you to get really into the details of a very complex form and make it do what you want. And the one thing that's very neat here is that it gets you a JSON editor where all the form definitions that you create are here. It's about 1500 lines of JSON that I copied and I have here. It's the whole JSON thing. And I can, using a form constructor that we define get that JSON from the file, read it and create a form from JSON, getting that JSON there. Just one line of code. Once I have the JSON just one line of code. Oops. Sorry. Sorry. I got a bit of a small room. I need to remove the directory. Oh, come on. Well, I am going to put a I am going to start with that. What I was going to show you is that when you use the from JSON thing and you have the form in the right place and they're not trying to present it's life. You get a form like this one which has nine pages and you can fill it up easily. Okay. Sorry about that. Back to the presentation. Inside the form, the library is the bydantic library which is a very, very neat way to use the Python type hints to create models using schemas. Basically everything in question is from a base model that has a configuration and the configuration just makes sure that all the fields in the JavaScript format of the SurbJS form definition are translated correctly to Python so that we can use camel case and not make our forms look ugly. We don't Oh, sorry. We use the snake case. Make a form with camel case. Show you what a model looks like. This is a question. And it has all these attributes. And these are all the properties that are accepted by the library. We have a complete validation using bydantic so if you initialize a form and you give an incorrect parameter or don't give the correct type for something, you will get an error. So it has to be correctly typed to get the bydantic is a very nice library and I hope I can use it more to do more things with this library in future versions. 
To generate the form, we basically generate a JSON code and get for the French widget we have some JAS, some CSS files and the form elements that were and we have this step where we construct the form and just go ahead and generate a full JSON thing and that's what is presented in the form. And that's more or less what I have. This is just an alpha version. I started working on this a couple of months back and I'm just starting to imagine where we can go. It needs a lot more examples. Though it has some documentation and let me this is the the GitHub repository questions and we already have the read.docs documentation as well with some information more or less complete information about the features, how to use it, code samples for everything but still it needs a lot more real life samples and stuff. It's on right, so you can just install questions and get it running and that's not the correct middle. And well that's about it. I would like to explore how to integrate with things like Django and other forms, other frameworks, it's pretty easy to use already like you saw in Plask and should be very easy to use in Pyramid. But Django is another thing long, I don't know. We can also I would like to generate code for the forms created in JSON, like the one I was unable to show you, sorry again but the idea is to generate the code, the Python code for the form so that once you get the JSON data you can create the form and then modify the Python code. And since we're using Pydantic, I would like to also add more validation that uses type hints and also allow some sort of form creation from the Pydantic schema directly. Those are the things that I think I could work on. Okay, questions. Thank you very much, I'll be on the in case there's any question and that's me, my email, I'm on Twitter and you can visit my repository for questions, create issues or anything, give me ideas. Thank you very much for your attention. Have a good conference. Thank you Carlos, that was fascinating. I'm so pleased that you were able to participate in this conference.
Questions is a Python form library that uses the power of surveyjs for the UI. The philosophy behind Questions is that modern form rendering usually requires integrating some complex Javascript widgets anyway, so why not skip the markup generation completely? This talk introduces the library and shows how to use it and when it's a good fit for Python web projects.
10.5446/54782 (DOI)
Hey everybody, I'm here with Eric who is going to give us an interesting presentation about some front-end stuff and the single-page application. So Eric, take it away. Thank you Andy. Okay, well, I need to be honest, I feel sad. I really feel sad because of these COVID-19 things, this situation. We can see each other like we do every year and it's really a problem. And I mean, I saw on a very reliable social network that it was mentioned that the other conference last year was probably the first COVID cluster in Europe. Yeah, I think it was on Facebook. So yeah, definitely something true. And yeah, so it's probably the right that we keep it low profile this year I guess. But still yeah, it's super sad. And so that's sad, but on the other hand, I was also thinking about what it means for being a conference speaker in this situation. And actually, as a conference speaker, it's maybe a fantastic opportunity to do stupid things that we could not do if we were not at home, right? So here is my personal take on that. So bear with me. Here is my thing. I think that sharing your screen when you are remote speaker like this, it's super boring because you get the slide taking all the space on the screen of people being at home. They are probably sitting in there so far or something. So they will feel bored really rapidly. They just see my slides. And I don't want that, right? And actually, the human brain is specifically trained to focus on human faces obviously. They are not trained to focus on slides. I mean, if we get to a point where people, brains are actually trained to focus on slide versus on faces, I mean, that would be a totally different civilization. And I don't see where they are. So I was thinking about how I could show my slide without losing my ability to show myself, right? So my idea was I could be the screen. I could video project my slide on myself, on my belly, right? So I do have a video projector and I try that. Actually, it does not work because my belly is not flat. It's tragic, but sadly, it's true. So my second idea was maybe I could take a board like this, right? I take a board and it's a perfect screen. It's made of wood. It's easy to handle. Perm is actually too bright. And I mean, if the board of wood is brighter than me, it's a bit insolent. So I was kind of stuck and I was thinking what kind of superpower does it get to be able to let people enjoy my ugly face and enjoy my slide at the same time? And as I was thinking about superpowers, I was thinking about no superpowers. You don't need superpowers. Think about Batman. Batman is super hero, but he has no superpower. He just has his mental strength and the dark suit. Then if I think about me, I do have mental strength as well. I just don't have a dark suit. Look at that. Origin. So I went to my dressing and I was looking for the darkest suit I could get. And look what I found. Yeah, a plant conference teacher. That's super dark. So here, look, I can show my slides. Yeah. So maybe you cannot read it super nicely, but don't worry. My slides are not super interesting anyway. So after all these nonsense, let's go with the actual presentation. Second guessing, the single-page app pattern. First of all, about me, I'm a developer. I used to be a Python developer. I'm not a front-end developer focusing mainly on TypeScript and Angular at the moment. And I'm currently working at Ona. So Ona is building what we name a knowledge integration platform. 
So the idea basically is that any person on any company has tons of information spread in many, many different locations like Slack, Google Drive, or Dropbox, or emails, or whatever. And what we do at Ona is we collect all of that. We sync them all, index everything, and then you can explore all your fragmented knowledge from a single entry point. So that's just syncing of it like what Google does for the public internet, what it does that with your own data. And we are a startup. We are based in New York, in Barcelona, in Raeli, San Francisco, London, and Toulouse. Where I am. I'm in France. So yeah, that's who we are. So about this talk. I started thinking about this talk when after reading a very nice blog post from Tom McRite. So Tom McRite is a quite famous developer. And it was, so I will not detail, right? This blog post, you can read it. You can find it easily. But it was actually discussing the relevance of what we named the modern web. And by modern web, most part of time, we mean single page application. So known as SPA. The question is, is single page application a good solution? But before that, let's start with etymology. Because my teacher always told me, you start with a definition from the dictionary, you start with etymology, and then you can start thinking about the actual topic. Okay, good. So shall we say SPA or SPA? Or we should say SPA for single page application, if you pronounce it like SPA, well, that's not the same thing. SPA, interestingly SPA, so it's like a base, it's taking etymology from the SPA city in Belgium. That's why I'm mentioning it because we are actually having this conference virtually in Belgium. And the SPA city take his name from Latin, which is Spaghet, which means scatter or sprinkle. And that's a bit ironic when you think about what SPA are actually. Because what they are? Well, the SPA approach is basically about exposing a single physical web page, so a single HTML page, index.html, containing an enormous GS bundle. That's what it is. And this enormous GS bundle might potentially be divided into smaller sub bundles if we want, but basically it will implement the entire application. And well, along the way, it will make requests to one or several back end in order to get data and will render it locally. That's what SPA are. And if you think about, well, Volto, for example, is a typical SPA, right? So the question is, why are we doing that? Well, I guess we started doing this SPA thing because loading a new page every time we interact with the page. Like every time the user is going to click on a link, we were asking to the server a new HTML page and getting back and rendering. And it was considered slow. Well, I'm not sure the slow aspect of it is a real problem. Because nowadays, is regular web slow? Is internet slow? Think about it. Google is making this Google Stadia thing where it's about streaming server side rendered video games. So if we can render a video game on the back end and play it on the front end, we do have some bandwidth, right? So I'm not sure the SPA thing is about reducing the slowness. It's not that. And anyway, if you consider people having low rate connections, well, I'm not sure SPA is making anything better for them. So it's not about the rapidity of the performance. Performance is not the problem. The problem is the stateless aspect of the regular web model. When you go to click on a link, you get an entire page, you render it, and it's stateless. Every time you get something totally new, you can start from there. 
There is no history, et cetera. What we want is, well, cleaner interaction. We want the user to feel like he's facing a standalone application, even though it's implemented through web components, web elements, and web technology. So we want something very dynamic, not a sequence of web pages. OK. And well, that's what we can achieve with JavaScript, right? With frontend. And implementing this kind of complex system with frontend is, yes, probably easier and we're, let's say, organized on a better way if you do that through an SPA. That means putting all the code together, making an application per se, to implement this kind of nice behavior of the UI. Because what were we doing before that? We were actually putting some chunk of JavaScript in server-side render page, like we used to do with Plan 4, for example, if we go back there. And that's not efficient. And we discovered that putting GSA everywhere like this is not a rational way to implement the GS server-side application, right? So SPA to this regard makes a lot of sense. But the problem is we are inverting the original web model. OK. The original web model is about having an application, well, the browser, right? The web browser rendering pages, which are the content. And with SPA, we are doing the opposite, right? We are using a page to render an app instead of the opposite. So yeah, inverting the model. And what's kind of surprising is we are investing a lot of effort to mimic the regular web behavior. It's mean. So we break it and we spend a lot of time making sure that while the user feel like he's navigating through pages, we make sure the address in the URL bar is updated. We make sure that he can navigate back and forth. We make sure that the user can share a page even though it's not an actual page. OK. And we make sure, of course, that SEO is working fine, that social network is getting the proper meta, and so on. A lot of things just to mimic the default behavior of a web page. So that's, well, a typical love-ate relationship here. On one hand, we are aware that the original web model is very valuable, and we try to preserve all its benefits. But on the other hand, we are just ignoring its main principle. And while when you twist the model, there are consequences. I will not go into technical detail about those consequences, but we all know the least of them. The rendering is slow, SEO support is super painful, GS building is expensive. I mean, yeah. A lot of trouble just for that. Fortunately, we are ingenious, right? So we know that every problem has a solution. Here we go with the solution. To mitigate the problem, we have created a gigantic technical stack. And what it does? Okay, it does work in some cases, but it also creates other bad consequences. That is always the case in this kind of situation. We are in a typical case of complexity denial, where tools don't seem complex to the person who builds them. The new tool creates new problems we didn't have before, and all those tools might work together, might work well in isolation, but they don't necessarily work well together. And that's where we are. And so to come back to the article I was mentioning initially, I'm cutting Tom Mack right here. He said, Fremont should lure people into the pit of success, where following the normal rules and using normal techniques should be the winning approach. I don't think that react in this context really is that pit of success. 
The naively implemented react SPA isn't stable or efficient, and it does not naturally scale to significant complexity. And of course, so the react part here is not mine. I could say exactly the same thing about Angular, which I love, but exactly the same problem here. SPA not stable, not efficient. So we are in a situation where the modern web is improving a lot the user experience. Like, I mean, we could not create something like Google Docs, for example, to with server-side only technology, right? I'm currently using Google Doc to run this presentation. I made my slides and so on. Yes, this would not be possible with server-side only. And sometimes I think, do I miss Python? Do I miss the old time? Well, I do miss the time where I had all the information I need in a single HTTP request, and then I just had to make what I take to return a proper response. The thing is this paradigm is over. This is what was called the Web 2.0. So that's 15 years ago, right? How modern is that? I'm not sure. Well, since this time, web is not about putting hypertext online anymore. But it's still about content. So one of the reasons for having SPA is separation of concept. It's a good principle, of course. So yeah, we try to separate the presentation layer from the persistence layer, security layers, etc. That's what SPA tried to achieve. Yeah, good, good. But there are other layers. In our typical SPAs, we are mixing together two different layers. What I would call the browser layer, which is providing the ability to access remote content, to navigate from one to another. And what I would call the content view layer, which is about the content itself. So maybe the text, of course, but the inner layout, the inner logic of the current page. With the original web model, we kept those two layers very separated. And as a result, I can add a new page to my site without recompiling Firefox. And I can recompile Firefox without regenerating the entire Internet. That's the same thing, right? The opposite would be total madness. While by implementing a part of the browser layer in the page, SPA is getting to this kind of madness. So, we are stuck, kind of, because modern web, SPA does bring powerful enhancements, but the way we are implementing it is not the same. It's a typical take, take it or leave it situation. So I'm wondering, what do we want? Well, what we want is proportionate complexity. It should not be SPA or nothing. And to be fair, it's actually not. The old web, let's call it that way, is still around 80% of websites use this PHP. 77% use JQuery. And these numbers are actually increasing. Why is that? Well, they are increasing because a number of websites are increasing and the majority of them, they don't need anything super complex like an SPA. So what are the solutions? I will mention a few technical things I have noticed recently. First, micro frontend. So you may be heard about this micro frontend concept. What is it? Well, micro frontend is actually macro web components. So Mike on Monday made a demo, a very nice demo about custom components. So you can create your own HTML component. You implement it yourself with JavaScript and you can reuse it wherever you want in a rich text editor if you want as Mike demo. Okay. That's a custom component. It's just a very small piece of UI. Like it could be a select or a checkbox, but we make it a bit different, a bit customized, but it's not like more complex than that. You will get all this information from its attributes. 
A micro frontend is much bigger than that. It's an entire chunk of the full application. If we talk about Plom, let's say you could have the sharing feature as a micro frontend. Okay. So this concept is interesting because a micro frontend is the part of the application that you compile and develop and maintain the part from the rest. And that's something that we can do now with Webpack 5. Webpack 5 is proposing the module federation, which allows to compile code, importing, reading, remote modules, remote modules that are not compiled within the full, the current project. They are external to the project. And to code, Zach Jackson, who is one of the authors of the module federation feature, he said, let's say each page of the website is deployed and compiled independently. Well, of course, for someone knowing about the original web, compiling pages like crazy, but I just love this approach. That's a great move. I mean, you would compile each page separately. So compiling the need, what you need in this page, it has an actual, let's say, JavaScript project and the rest would also be independent JavaScript project. And the way it works with module federation is that wherever you land on your website, so on any page, you're going to land on the page and it will become the host for the authors. So you start from there. It will load all the shell dependencies and the vendors and et cetera. And then when you navigate to other page, it will load lazy loads, the micro bundles needed for the next page, and it will plug dynamically into the host. And the host can be anything. As I said, whatever the landing page is going to be, you're going to start from there and work all compiled separately. That's a great, great thing. The other technical thing I should mention is ES6 native support. So I know we have been mentioning that already in the conference and module federation as well. I think yesterday that was, you honest ES6 native support is interesting because it allowed to import export modules, natively in JavaScript in the JavaScript natively supported by browsers, right? If you combine that with HTTP2, well, you need no bundles. I mean, you can come back to this paradigm where you have a file tree structure for your development with granular date. Like you can go and change this part of the application without bundling the full thing. You just go to your project tree. You change one GIS here and you push it just like we used to update an HTML and push it to the server or update a PHP file, why not? And push it to the server. That's a big win as well. Because I mean, honestly, bundling is the most brutal thing ever. It's awful. I mean, it's terrible. And we need to get rid of that. And my personal point of view is we should make sure we respect the layers. The layers I was mentioning earlier. As we just discussed, but it should be done according to the proper layers. So you should have a browser layer, which is content neutral, acting as a browser. So it offers navigation, authentication, connection to the backend, rendering machinery, and also maybe services like state management, why not? Or Apollo for GraphQL, these kind of stuff. Then you have a content layer, which could be actually very simple. Maybe it's just HTML. If it's good enough in your case, HTML is good. You could have your browser layer, which is a front-end application, taking some HTML and rendering it wherever you need. Or it could be maybe a bit more complex with JSON. 
So you get JSON, your content layer gets JSON, but maybe a template. And the browser layer will merge that to render the JSON with a template. So basically, it's like if you could teach your browser to render JSON content in a given context. And of course, if you have a very, very complex application, your content layer might be specific components, specific feature you want to render into a specific place, but once again, dynamically plugged into the rest. So both layers are totally independent. They are built separately. They might be based on different technology. Like nowadays we have readers or Apollo implemented in techno-neutral way. And now the question is, I am proposing a more complex solution to solve a complex solution, a complex problem we have. Not quite. Not my idea. What I mean is we should have a generic browser layer. It means as a developer, as a website developer, and as a web application developer, I would not implement the browser layer. It would be generic. It would be something which is common to many different use cases. So providing login, providing navigation, providing a UI library, providing core frameworks and utilities, that's totally generic. I don't want to implement it every time. So we don't code it. We actually we just declare it. For example, if you have dependency, you want to pip install stuff, you just add those dependencies in your file and then it's done. You do not copy, past, implement, build, etc. No. So instead of what we have currently with SPA, where whenever you add a new plugin or whatever dependency you have to plug it manually into your code and it breaks stuff and you need to rebuild everything and so on. No. Think about the plugability style we have in in clone and in Zope. I mean, plugability is key for simplicity. So what you, the way I imagine this browser layer is that we just declare stuff. I declare I want material UI. I declare I want Pasta Naga UI. I declare I want Apollo backend. I declare I want such OS mechanism. I declare that through the file, the text file and you're going to run, you're going to get the proper component wherever they are in registries or wherever. And then as a developer, I just need to focus on the content layer. And if HTML is good enough for that, well, that's done. Okay. I just render HTML. It's just not the old HTML principle where the browser itself is rendering HTML and it is getting not interactive, let's say the SPA aspect of it. SPA quality is implemented by the browser layer and I don't need to care about it. It will be done transparently for me. So the first step, if we want to achieve that, we'll be probably to implement this browser layer, a very generic one and provide it as a reusable solution. Like for example, we have open source application on the server side. CMS, for example, CMS is a tool that is a generic tool allowing to build a custom website. Well, it would be exactly the same thing. We invest time to implement this generic layer and we reuse it in many, many contexts. The second step, well, maybe the second step would be to push that to the browser itself, make a native support for this feature. It's a browser layer for a reason. It should be into the browser. And my point of view is that browser must address the modern web needs. And it's not just about running GS faster year after year. It's about providing tools and utilities that everybody needs when building a web-based application. 
And I think that if we are struggling today with this crazy tech stack, it's because web browser failed us as supporting complex web app. But well, I think we can fix that. So that was it. Thank you all. Here is my contact information and I will be happy to answer any question on the JITSEE room later. Thank you, Eric. That was great. I appreciate you being able to participate and to provide such an interesting talk. We only have one question in the slide here right now. And we do want to know, did you have to train for a long time in order to be able to hold up a shirt for that entire length of your talk? So that's a good question. Actually, I started with the idea of putting the T-shirt on the board to make it super flat. And it was too heavy for me. So that's why I switched to this solution, which was actually acceptable. I was not sure. I did not train the full length of the talk to make it short, but I'd say let's try it, right? Well, excellent work. And I appreciate you being able to participate, like I say. And thank you for such an interesting talk. And I encourage everyone, you included, to do the face-to-face as well. People have posted comments and other commentary in the track too, in the Slack channel. And thanks again, and enjoy the rest of your conference. Thanks.
SPA is about providing an entire app by exposing a single physical web page containing an enormous javascript bundle. It breaks the original web paradigm in many ways. Surprisingly enough, we invest a lot of efforts to mimic the regular web behaviour. Isn’t time for modern frontend to reconsider the SPA approach?
10.5446/54783 (DOI)
Hello, welcome to Alexander's talk called The Plone is Dead, Long Live the Plone. So Alexander is a longtime member of the Plone Foundation, having worked with the security teams and also we've worked together on the board for a few years as well. Alexander, it's a little bit different this time. I did not get your slides ahead of time to look through them to see how the English was. That tends to be something that we do at conferences. I'll let you go ahead and start your presentation. Thank you, Chrissy. Well, the Plone is Dead, Long Live the Plone. Sometimes last year in Ferrara I said I will do a talk like that. And now I'm doing it. So at the beginning I have some license work in but they're under fair use conditions, so hopefully that's okay. I'm referencing a lot of the talks from last year and overall some of the other conferences. But well, what is it about? Last year at Ferrara I had some kind of feeling of realization. The past few years there was some kind of feeling in my side that Plone changes or people in the community changes Plone to something I don't like or I don't understand. But actually last year I really realized that there is a different thing. So Plone is something else for everybody, especially for us in the community. We all have different use cases, different mindsets about the thing we want to work on. And what I'm really was realizing that I or myself was wrong in a lot of parts. Thing is language is difficult. As I said last year language is important. It's more than syntax and semantics. We have a specific meaning. And as we're talking about technologies, technology are complex and maybe complicated. And to build a common sense on understanding for that technology is even harder. And for me as an information scientist and a manager, it is a bit more complicated because I'm very language sensitive. So maybe not always language aware but sensitive. So if I hear the wrong word, it's sometimes hard for me to see if we're really talking about the same thing. So most of the biggest problems in software are problems of misconception. So the question is what is Plone for you? And we do have that fantastic video from abstract IT from Sorrento that shows something about that. What's your name? I'm Matt Hamilton. Yeah, where are you from? From Bristol, UK. What's Prone for you in two, three instance? A great community. So I think the Real Space State also we would like to've the global economy państ let's seeheits what express Mils KNOW and Ahки but I'm Alexander, from Romania. For me, it's a way of life. I'm Alexander, from Romania. For me, it's about what I do, what I like to do, what I do as a citizen. I'm Alexander, from Romania. I'm a member of the family. I'm from Austria, from San Francisco. Although I no longer do play, I still do sites to play my way. So you can take the man out for play, but you can't take the play out for man. What do you have? What do you have? Jamie Lenton, from England. A phone for me is this magic box of tricks that can do so many things straight away. Malé is from South Catalonia and I think it will be my friend. Thank you. This video from Extrac is really fantastic and shows some of the different voices from the community. I want to ask you what is Plone for you? I've asked Chrissy to add five poll questions and if you would answer it would be nice. My first question for you is what is your connection to Plone? The second is what is Plone for you? What are the core ideas of Plone? What are the core functionalities of Plone for you? 
And as we always compare Plone to other things, I want to ask you also who do you think are the competitors to Plone in your point of view? I want to go ahead and say what is Plone now or what is Plone in 2020? We do have fantastic presentations from the Plone conference 2019 and before. And as I said at the beginning, for me it was like a feeling of realization there. So I want to show you some of the slides or the concepts presented there that really impressed me. First of all, what is Plone? That is how the Plone community page shows it. And Erika Andre said this, Plone is the software, is the community, is the foundation and it's not the same thing at the same time. So Plone is something special. Plone is a software. Plone is an API. But more important, Plone is a community and Plone is a foundation. There was, I think last week Kim has done a new podcast. So from his podcast use the Plone Connection podcast with Erika Andre. There was two quotes that's really important for me. Plone is an API, but there are more than one API. It's Plone API for the internals, it's Plone REST API and the Plone UI and none of them is complete. And the other thing is every Plone company has a different point of view of competitors. And that's because everyone focus on a different thing in the product they're using with it. So they have different use cases, different competitors, different markets. And that is something that makes Plone very special. If you're looking into the slides of Eric Steele, if you're looking at the Plone now, the mature open source, Python CMS, Plone is the community, and there are the foundation viewpoints, the community viewpoints, and Plone as the API contract, where he said the four key elements are security, flexibility, extensibility and user experience. And that makes me to the thing that Victor said, what, no really, what is Plone in 2020. For him, it was the content types, the permission, the workflow, the hierarchy. And the overall question, what do customers and users want nowadays. And Plone can be very different, the Plone Classic and Volto with Plone, rest API as a backend, Plone with Volto with the geotina backend, or Plone with other front ends or with other compatible API back ends. Because the one thing that really stuck to me was the sentence implementation may change over time, while you don't. And just in the presentation before by Victor, he said, Plone has involved from being in a standalone product to be a contract, the knowledge and wisdom that we the Plone community achieve during the last 20 years building a world class enterprise CMS. And yeah, there is a lot of things in various the product and the CMS thing. And Timo described it a bit different. So saying the limit checklist, it's all about the simplification simplifying Plone. And he has two major target audience developers and users. And the developers, it's the thing about simplifying the technical stack using standard technologies with the rest API plus react or other JavaScript front end types for the user. It's about reducing the cognitive overhead, pure content types, composite pages and everything like that, making a smoother, smarter editing experience. I asked myself, what is was blown for me. Well, in contradiction to Erica, I've no three points I have five points on my list. It's blown the CMS. It's the framework toolkit we have to use in it. It's the vision behind Plone. And most important, it's the community and the foundation behind that. 
Plone is the thing that makes Plone really, really special. For me, blown is a vision. It's not the contract or an API. It's the vision to empower users. But what kind of users we have. So we have the customers we sell a product, we have the content consumers working with the blown side as an anonymous or as a locked in reader. We have the content creators, we have the power users, the editors and chiefs that do some more stuff. We have the Plone site admins, we have the soap admins, we have the server operators, the integrators and developers, all of them we have to take with us. And what is blown. So back to the API point of view. We have one API as a contract of the core functionality. Yes, there is something in the past I always. Oh, so dude, so I'm marketing about blown and I always said, blown is a content management system. Well, Victor said, the young don't know what a content management system is anymore but well that's one thing. The other thing is blown is not web primer. The other thing is more a content integration framework on umbrella for lots of framework that lets you allow to do a lot of fantastic stuff and integrated with blown to show different content, different ideas in one side. And actually, I was wrong in some way, because if we're looking as blown CMS as a product or blown as a vision and the simplification. I made the same mistake that most of the people using blown did. If we look back, blown the product and blown the framework the toolkit is something different. And there is the umbrella of framework and toolkits we use in the phone community world. But that's something different than the product of a CMS. As Victor and Eric said, it's all about values. The implementation may change over time, the value down. We all share the same value as blown. And team or set it with the right image. All ideas were out of other ideas. I like the quote from Isaac Newton. I have, if I have seen further it's by standing on the shoulders of giants because we're building upon the ideas of others about the results and achievements of others. And that is also a thing that blown is Eric described it with the CMF and clone the connection. Actually, he goes in more so CMF blown the API and Volto that is what clone six is or will be and the future is. But if we're looking a bit back in the time, blown was the user interface to the power of the soap see and content management framework. So if we're thinking about the huge technical possibilities that are hidden and buried in this NMI and was not approachable by users, the blown UI gave them user experience and and transparent to that. And we've learned a lot about through the web development. Yes, a lot of the through the web development we've done in the first time in blown one to two five. It was possible, but it was not a good idea so we moved, or we shifted to go on the command line interface, doing the changes in code, everything and not the new set of the be and the blown user interface does still provide the most important points to interact with the system. So, if we're looking in the overall architecture of soap and blown, it's a layered system layers in computer science or information scientists, something very special and nice because layers high complexity. And complex things, even the year they define an API between the layers, they are transparent you can just pull one module out pull another one in. 
So, in archetypes time there was just one line to change, storing the data and a ZDB a postgres or my SQL database just was one thing. And also the same in some other points for the end user, it does not matter if the content was stored in archetypes or in dexterity. So there are layers in the stack that makes it very powerful because we are not a monolite system that is interconnected everywhere, we can have defined layers, we can build upon and remove or change. That's very nice. And it once a kind of graph of an evolutionary about soap and blown and there's lots of technology around working with. But one thing I've seen there is naming things as hard. And we as a blown community has done something with that when taking namespaces out of the community, not where the stack layer or the tool in the stack belongs to. So if we are looking on a simplified version of the phone six that there's so many different components involved that gives the power to clone and till 2015 we're thinking about blown as the genetic phone packages the phone call package and the user interface, maybe already some of the the rest API and both of ideas. Today, the phone community is larger as we as a salt or simulated the soap foundation and the soap community. We working a lot together with the pylons folks select Steve Percy and the others from pyramid. So the community is even larger, and there's more common packages involved with it. So, the question is, what is blown and where does all the stuff belong to, we cut not work without those. But on the other hand, you see one thing. We are the blown community. We are not the soap community. And they were the one different choice it was about the values ideas and vision. And that's what we do in the community. We work at xlini and run and really did a fantastic stuff, burning up a community that is in parts different than the soap foundation. And there is more the focus on the values in it. Let's go back to the API. So if you're thinking about software as an API, you always have the API problem, if you're changing it or doing a new layer on top of it. What is the API needs to be do is find. Normally we go by the Pareto principle, so that 80% of this stuff can be done in 20% of the time, and is the benefit things for everyone. The last 20% takes 80% of the time and everything. So it's about defining which functionality we really want to go in the API so which contract is it. We're reinventing a bit the wheel in it, but the other problem is, we never have a 100% feature compatibility to the old system because they are undocumented features there are features that will never go into the API. It's also a chance because we want a deprecation of some features that we have seen are not good for the best practices for the community, for the developers and everybody. So, there's a different thing. And as I always said, the goal of law about a complex system that works in a very profound have involved from a simple system that work inverse propositionist also appears to be true, complex system designed from a simple system that never works and cannot be made to work. You have to start over beginning with a working simple system. That's all about. So, we have a working set with all the clones OBSCMF stuff in our stack. And putting a layer on top of it to make it more simpler to come an approach and computer science do work like that. But on the other hand, one size does not fit all. We have seen that people want specific use cases highlighted moving out doing something else. 
So, clone and soap is different than the Athena and pyramid and other frameworks coming out of our community like more person song. So there's specific needs. And it's not a bad thing. It's a good thing because if we have the variety and we see new ideas we can adapt those ideas, but not limit us to do an overall monolith system that has to take care of all use cases. So if we're looking back into the idea of the API and Volto. So we have different things so Eric and Victor saying you have to open clone at the back end you have to Tina, you may could have a pyramid back and for Volto, but all of them also if you just want to look for content consumers sometimes get be could be the better solution for reading it. So there's a lot of things. And I want to see one other thing I've reminded that in Brazil 2013 we were discussing about the future of clone, especially then with the Python free focus, but also what will be blown in the future. Volto at that time was not already on the floor, but I forget the name. There was this one quote from Paul Everett that they had one lesson from soap three. So I was in the past saying, Volto seem as is a fantastic name because it's not blown anymore. And I must admit, there was wrong too. Because if we're looking into it. It's all about the vision of about empowering users. It's the content creators and the developer focus. And if we're looking in other systems, all systems work like that, they grow over the time. And then they see they are over complex they making distributions and making their system simplest by giving it back a product to the director. And that's the same we do with the phone community and there is this famous quote from Albert and some any intelligence who can make things bigger and complex takes a touch of genius and a lot of courage to move in the opposite direction. In the beginning that was one of the things that Alex Lemmy was really doing with the blown community had that idea that vision and the capability of simplifying things. And not a quote from the past and the blown community was, it's all about rapid turnaround. This fantastic video by John Kelly from 2006 shows what it is for a developer. It's all about the rapid turnaround to work smooth and smart with it without the XML setup. It's all certain everything. And I've understand last year that the vision lives on the vision is wall to today and Volto is the future of the new I, and you can say it's like in the US presidential election process you have the primaries, and where all the candidates present their concepts, their ideas, and they vote a potential leader, and on the Congress. There is this force to join together to join together behind the new leaders the ideas and the new Austin nominee. And therefore, I would say, I proclaiming that Volto is the essence of the plan vision. And that is what it's all about. And that's why I chose the title of this talk. It comes from the French law, why a more evil law, the king is that long live the king. It's not the traditional proclamation, but it's not the sad thing that there is an end. It's the other thing it's a change, but with the continuity. You have a change of generation, could be like a new hope, because they're new energy, new enthusiasm. If you're looking more into this sports analogy. It's like passing the baton and race. And that's what I'm seeing at the moment. 
It's that the ideas we have seen, and the leadership by Alex Limi and Alan Runyan in the first days, are now carried by the fantastic user experience and design work from Albert Casado and the implementation capabilities, ideas and concepts of Victor Fernandez de Alba. So we should join together, helping them, making the vision for Plone come true and helping Plone be what Plone is for us. And Victor really loves Star Trek, me too. And for me it's like seeing the Next Generation: the implementation changes, but not the values. We are still the Federation, or, like we said, the Plone collective. It's all about changing our base, but we still work with the same values and passion. It's all about leadership and vision, what we have. But all the leaders are nothing without the good people behind them, the people that build the technologies. And we are not only about the rock stars in the community. It's all of you, all of us. The Plone community is an awesome community. We all work together and make the Plone thing happen. And I really want to show appreciation, and this is also an acknowledgement: I really want to thank all of the people from the past, the present and the future who are working for Plone, and I want to thank all the active Plone community members. Things do not happen; things are made to happen, and change needs that leadership, and we need to follow that. So beside the product Plone, there's the other side: the foundation. The Plone Foundation, with the mission to protect and promote Plone. And, as Eric said last year, what the Plone Foundation Board of Directors does is the decision-making structure for essential community activities: managing copyrights, trademarks, domains, intellectual property, code licenses, marketing and communication, fundraising to support the community, finances, sprints, events, community infrastructure. But on the other hand, there's a lot outside of this group, outside of the foundation. We neither lead nor steer the development of the software; that's a matter for the community teams like the framework and release teams, and the roadmap, priorities and features come from the community. The foundation has liaisons to some teams, but that's only about communication and giving them, if necessary, the financial support to do their work. The other thing is we do not compete with Plone, Zope, Guillotina or Volto providers. We are not about support contracts, licenses or trainings. It's for the community that the foundation works. The foundation always needs you. On Thursday there is the annual meeting of the foundation, and I ask everybody, foundation members and non-members, to show up and just see how awesome this community is and what it has achieved and done. It's always: we need you. We need you for the Plone Foundation and the community. If I may, I have some wishes about the future of Plone. For Plone, the product: please, everyone, endorse Volto. It is the vision of simplifying the CMS work. With that, as Timo said, we can attract new developers and keep the community vital. The other thing is, as I'm more and more doing when I do Plone stuff, things in the Zope and CMF stack or even lower things like RestrictedPython: I would love it if we could move packages or modules that are already generic into a different namespace, to show that it's not an element that only works in Plone — it works in other systems, like zope.interface.
Everybody thought it is bound to Zope. No, you can use it everywhere. For the Plone Foundation, it's about securing the product, framework and toolkit. And one thing I've seen as I presented the simplified stack: we do have a large overlap with the Pyramid and Pylons family. And I would be glad if we could also absorb that community, working together. We share the same mindset, we are working with the same technologies. It would help to protect our product, but also give our financial and working efforts to that community and to those projects, to keep them involved. We all need Waitress and other technologies from the Pylons stack. And for the community, especially this year where we cannot be together in one conference room or one conference location: stay connected, be together. We can learn from each other, and it's not just sprints — all of it is about learning from each other. You may think the others are way smarter than you, but I can tell you one thing: even the smartest people in the community are approachable. We don't have rock stars apart from the community; we have one community. We can share a beer or a drink, join together, talk. It's not like, ooh, this is a rock star. No — I still remember the day of my first conference, 2010 in Bristol, where Alex Limi and Alan Runyan and some others said: oh, you're new here, come here, let me buy you a beer, let's talk. It was so fantastic, and that's the spirit I want to keep. And on the other hand, you see lots of people moving on, but, as was once said in a very fantastic way: you can take the man out of Plone, but you cannot take the Plone out of the man. We stay a community. That's what it's all about. So for me, it's just to say: live long and prosper, stay safe, stay secure, and I hope to see you somewhere around, here at the conference, at the next Plone Open Garden, the next main Plone conference, or one of the many sprints. Thank you. And maybe we go directly to the question and answer in the Jitsi room. We will in just a minute. I want to thank you, Alexander, for your talk, and also for all of the work that you have done for the community yourself. I have many specific questions in Slido, and there was plenty of conversation in Slack, so make sure to check that out later. But for now, underneath the video inside of LoudSwarm you'll see a join face-to-face button. Jump over there now and you can talk to Alexander. Thank you.
A talk about the essence of Plone. What does Plone mean for somebody? Plone is an awesome combination of vision, software and community. But what defines Plone, what is its essence, and how does it change over time? At the Plone Conference 2019 I had the feeling of realizing why we sometimes speak of very different things and why the Plone of today is not the Plone we know from earlier years. This talk wants to summarize that realization and explain why we may have a problem of misconception. It is a mix of technological questions, the overall vision, branding and community topics in depth.
10.5446/54785 (DOI)
Okay, can everybody see that? David, is that visible to you? Well, hopefully yes. Hearing no feedback. But you're good. Okay, great. So we're going to have a panel discussion today. I'm going to pose a series of questions, the first of which is this, and we're going to have a little discussion. At about 35 minutes after the hour we hope to have about 10 minutes' worth of questions, if people have questions, but we want to reserve the last five minutes of the talk for wrapping up and discussing next steps to take. So that's the scheme, and David, we would love your help keeping us on track with that. We're going to try to have about 15 minutes for this initial discussion, and then about five minutes for each little question topic after this. So, timer set. Thank you. So the first point is to discuss the relationship and positioning of classic page editing and Volto pages. The internal representation of TinyMCE pages — straight-up pages — Mosaic pages and Volto pages is different; not surprising, but not something I had realized or thought about very much before a few months ago. So the question is: is this a problem? Is there a solution — would one representation and one tool be possible? If we need three page editing solutions, how do we position them? This gets into the "what is Plone" question. And I'm not going to be very able to call on people who raise their hands because I can't see you very well, so take it away, folks, and just dive in. Unmute everybody and turn on your videos. Well, maybe to get things started, Paul could say a few words — Paul is here to represent the user experience side of the equation as opposed to the developer side. We'll be talking about some very specific use cases later, but maybe Paul wants to say a couple of things about how this plays in terms of marketing and positioning Plone and all that. Yes. For me, it does represent kind of a problem, because it is a question of when do you switch over your site? Can you switch over your site? Is there a pathway to have your site that is based on Plone 5, or on Plone 5 plus Mosaic, turn over to Volto without redoing all our 700-plus Mosaic pages? That would be the third time we would have to do that, coming from various previous incarnations of composite pages. So yes, it presents a problem in the sense of: where do we stand? What are the upgrade paths? What can we do? Can they be combined in one site, so that we can already practice a little bit with Volto, or do we have to give up that idea? So for me it is a problem and it is a question. Don't everyone speak at once. Oh, I guess I could answer at least part of those questions. There are many questions, right? So we have to go step by step here. But I think the question that I might be able to answer best is the migration story, because we have been using Volto for about three years and we migrated quite a few large projects from classic Plone or even other systems to Volto — like a 10,000-person intranet we just migrated, and the website of one of the highest-level government agencies in Germany. So these were quite large sites, and one of those sites actually had collective.cover, and the other system was a completely different system, also with page composition capabilities.
The problem in general with all page composition tools — no matter if it's an external system or if it's Cover, Mosaic, whatever, or even Volto — is that they're pretty specific. They solve pretty specific use cases and they also come from different eras. So if you take Volto, it solves the layout problem in a completely different way than Cover did, for instance, or Mosaic. They have this idea of tiles that you can arbitrarily arrange, right? And when we started Volto, we realized that this is not the way modern websites look. We realized that modern websites are more block-based: because of mobile design and its restrictions you have those blocks, and Gutenberg takes this approach as well. So when you migrate from Cover or Mosaic to something like Gutenberg or Volto, you're migrating from one world to another, and you have quite complex implementations, actually, and they differ quite a bit. But we had the problem of this 10k-user, few-hundred-thousand-or-million-document intranet; we had to migrate that from one system to another. So we tried really hard to solve that problem, because it's hard to solve manually, and we put a lot of effort into that. But the problem is that even if you succeed — you are able to migrate all the data from system A, whatever that is, Mosaic or whatever, to Volto — what you will end up with is basically a page that will look ugly, no matter what. I think that's a problem that computers can't really solve. So the best thing that you can accomplish is that you put a lot of effort into it because of the technical complexity of the system, and what you end up with is something that you will have to go over anyway. That can be a helper: for sure, it might be better to start with something that does not look right than with something completely new. But our experience from our projects is that you want to touch those things, because there's a reason why you're moving from the old system to the new system — most likely it's because the old system doesn't allow you to do what you want to do, and then your pages look ugly and old, so you want to go over them anyway. And for 100% of our projects, we decided in the end that we were going to redo at least the important overview pages. We wrote a migration algorithm that's able to migrate the entire thing, and then you can basically cherry-pick individual elements and the client can recreate them. So you can have a combination of automated migration plus the client starting to override certain pages. And then you also have the problem that you're linking between those: when you do an overview page, 90% of your content is a teaser, and then you have to link. So it's an incredibly hard problem that we're talking about here, on different levels. Maybe we should let somebody else jump in here. And just to say something I should have said at the beginning: let's try to keep this talk at the conceptual level and not get too far into the technical weeds. So, does anybody else want to add something, maybe briefly? Sure, Philip, go ahead. Okay, I'd just like to move at least those parts out of the way that we know we have, so that there is no uncertainty about those.
So when it comes to migrations from whichever Plone version to Plone 6: we have the migration of all schema-based default content; we have the migration of item-ish to folderish content — it still has to be merged, but the code exists in the folderish-types branch, so it's not a technical issue, this will be done; we will have a migration of the site root from the old site root to a Dexterity site root; we have a migration from Python 2 to Python 3 and from Archetypes to Dexterity — nothing there is new. So all the content that is in a schema will definitely be migrated. And what we could do, for example, is handle pages that don't have a lot of Mosaic, because this is the only place where I see a real challenge — and I tend to agree with Timo that it is kind of a non-issue, because you will probably have to touch these anyway. But disregarding Mosaic, everything else can be safely migrated. And the problem that documents in Plone 6 will automatically have the Pastanaga editor enabled could be fixed by giving them a marker to not use that and just display the schema-based document. And starting from there, moving on to tiles, or moving the whole thing into one big block, there are various solutions. So all the normal content, unless you have Mosaic or any other composite page creator, is easily — well, "easily" is probably overselling it — but it is migratable, and you will not lose any of that. What you will lose is probably portlets, unless they get implemented. But everything else is solved on a technical level, and most of that already exists. Excuse me, let me just jump in here. We've been mostly talking so far about how we can migrate content to Volto, which is great — I'm really glad you guys are working on that. But the question we're posing in this slide is: do we need all three? And I hear you saying, I think, yes. I mean, I know that Volto is the future, but we have quite a few classic sites out there that are not going to be moving to Volto anytime soon. So there's this rather long transition period, and we need a way to position all these different ways of editing pages. Does anyone have comments on that? I think in the roadmap we committed ourselves to support what we call Plone Classic for quite some time. There are no plans yet to cut it off in any way, so it will continue to be there. We will not force people with Plone 6 to jump on Volto. And if you have a large existing site and you're not willing to redesign or anything, there's no reason that you should jump to Volto, or that you have to. The same was true from Plone 4 to Plone 5, when we redid the front end; that's pretty similar to the jump from Plone 5 to Plone 6 now. But if you do a redesign with your client, and you can add value for your client, then that would be the point where you should consider moving to Volto. If you have an existing site and it's large, there's no reason you have to jump on it right now. You can wait a bit until the entire migration story is ready, and maybe people will come up with the migration from Mosaic to Volto — that's possible. So, you're emphasizing that yes, we need these different representations for now.
And there will be this overlap period. I don't know, does anybody have any thoughts? As providers who are not necessarily all at the same stage, how do we talk about Plone in these times, when there are these very different ways of doing things in Plone? Yeah, I think the three will be needed and there's no need to unify them or whatever. We just need to provide ourselves with this set of tooling. And then if we do a migration or something: as Philip said, classic schema content will go to Volto right away, because you don't have to do anything — even for rich text fields it will work right away. The only thing for pages is that we need some kind of fallback if the blocks are not there, which should be easy to do as well. And then someone will come up with a way to migrate Mosaic data to Volto pages. But as Timo said, it's nothing that is straightforward, because you need to have on both sides the same tiles, or types of tiles — whatever you have in Mosaic you have to reimplement or remap to existing blocks in Volto. And then the outcome won't be the same anyway. I can imagine a number of ways to overcome that — like, I don't know, dump Mosaic pages into HTML, then load them back into Volto blocks as HTML, and then have a way to split them up if you need to and convert them into proper Volto blocks. I don't know. Implementation details aside, I think it's definitely doable, and there's no need to unify them or anything. And yeah, we'll have to live with the three implementations. Yeah. Okay, let's move on to the next question. I'm hoping to rope Eric in to say a few words at some point. So, here's a question. Say you have a big classic site — for example, a site that has highly structured content types, not a lot of pages where the block editing thing is useful, because the pages are already heavily structured by the content types themselves — but you want to take advantage of all that data and maybe have a nice little React app sort of thing as part of a sub-site. How would you do that? I think it's definitely doable, although we haven't tried it yet, I'll say. But it's a matter of where you point your Volto site: you tell Volto where your API is, and if you make that piece of the site available in a virtual host and make Volto think that that's the root, I think it will work. Right. But the pages in Volto — what would they look like to an editor who went into them from classic, and vice versa? Exactly — then you will have to do the same dance that we described before. If you want all classic, then you will have all classic with no Volto blocks. And if you want to go all-in with Volto blocks, then you'll have to do something to transform what you had into Volto, or start from scratch. Maybe in this use case you could start from scratch with Volto in a sub-site or a section of your site, and then have nice Volto pages in a newly created section. I don't know. I can kind of imagine that — just wondering about the user experience for editors. I mean, has anybody thought about how to...
But there are so many technical issues if you try to link from one content item to another using the relation widget, for example; and the user expects the same experience — if they click on a link, the design would look different, the UI and the editing would look different. I think this is a non-issue. I would have a very hard time coming up with an actual use case where I would want to do that. And I think we as professionals are able to convince our clients that it's not a good idea. If they want a Volto site for something specific, they get a Volto site, a Plone 6 site; and if they have legacy content, they get that. And by legacy — I mean, Plone 6 as an LTS Plone Classic is not outdated. It's going to be an awesome website. So in other words, you guys are saying don't do this; it's not supported, basically. I would say you don't have a compelling use case where you can benefit from it. Because, as Philip said, why would you want to move to Volto if you don't use any of the powers that it has? And if you don't want to use the page composition tool, why would you want to move to Volto in the first place? Let me just ask if Paul has anything to add here, because I know this was one of his sort of... Yeah, one of the use cases is if you have a large site with many departments, where a lot of the content won't change that much anyway, but there is a sub-site — for instance the marketing department, or, well, name any university: it hangs under the entire university, but you go to university slash physics, and the physics department wants to have snazzy new Volto things, and they don't link much to the rest of the site anyway. Right. So anyway... I would go with a separate site then. In general, that also applies to putting up multiple sites, right? Why don't you just create another site, connect them both to the LDAP, and then you're done? Then you have your separation, and no need to interfere. I mean, that's possible, but why would you want to carry that burden? A possible reason might be if you have these very rich, structured content sites, and you want to take advantage of some of that content in this sort of app-y thing that you want... Then you want a different view on the same content, maybe, and you can do that. Then you can use all the power of the REST API in Volto, or even go headless — create your React or Vue or whatever application to have this super nice view on your existing content. Then you can use Plone as a headless CMS, basically. You continue to use your Plone 5 user interface, you add stuff there, and then you show it in a fancy way — though it's hard to imagine showing rich text in a fancy way, to be honest. That's a chicken-and-egg problem, right? Did you guys just lose my video? You did. I have no idea what happened here. It looks like my Google Slides just crashed, so I'm really sorry — why don't you continue the discussion while I scramble to get the slides going again? I think we should move on to the next topic, because I can't see it. The next topic was upgrades, but we already sort of covered that. Yeah. Let's move on to the one after that, and I will start my presentation again. I'll just read it: cost and benefit of upgrading a big classic site to Plone 6. This is the situation where the classic site is staying in classic because it's highly structured data or whatever — there is some reason to do that.
Plone 6 is the LTS version, not Plone 5, and we've just had a quite expensive migration of our site to Plone 5. What is the cost of now doing the migration to Plone 6 so that we get onto the LTS, and what is the benefit for an organization that's staying on classic? Do you want to take that question? I could only speak to the discussion we had when we discussed that in the Steering Circle or the roadmap, not specifically about the technical stack of Plone Classic. Yeah, this is not about the technical stuff. This is about: you have a client and you want to tell them that they need to upgrade. What are you giving them in exchange for the cost of that upgrade? What is coming in Plone 6 that will be valuable to them? Okay, two cents from me. We have this discussion at every major upgrade of Plone: why do we need a major version, why are upgrades expensive? My answer is always the same, basically: communicate every upgrade as a relaunch. If there is no hard technical reason — like, I don't know, still running Plone on Python 2 — the relaunch is the reason for the upgrade, and every new feature that you get is the benefit for the client, plus the new design and the new site structure and the new editing experience and the newest TinyMCE, because we're going to update that as well, as far as I understood, at least. I can see lots of benefits in having a Plone 5.2 site migrated to Plone 6, but if I talk to my client and say, yeah, there's a new version, that's why you need to upgrade — this is not going to fly. This never flew. It's always a sell, unless you're a technology addict and you need the newest version every time, like me, probably. I agree with Philip in the sense that if there is a cost, then you should try to sell your client something that's valuable for them. So I fully agree that this is the right strategy. Though I think I disagree on that being the default. I know it has been the default for quite a while, but I think that this is really killing us in the long run, because you can't always tell clients that; you can't combine it all the time. Sometimes you just can't, and when we make a major release, we should give the client something in exchange. We became a very developer-oriented community, and every developer understands the value of going from Python 2 to Python 3 or doing updates. That's totally clear — we don't have to discuss that value, or the value of the migration from 2 to 3, for instance. But for clients, this is a cost they don't understand at all. I think we desperately need to get back to making major releases that add real value for clients, so that you don't have to reach for this relaunch thing — which is a good thing anyway, but it should still work without it. We should make releases that have so much value for clients that they want to jump — they come to us and ask: hey, we saw those cool new features, we really want to upgrade to Plone 6. I think Plone 6 is the first release where we can provide clients a tremendous amount of new features and functionality. And this is for me just the first step; we have to go on from there. We have to get back to a situation where Plone releases sell themselves. It sounds like we have some agreement that Plone releases should offer real value so that people really want to upgrade to them. But it also seems like the position is that the big value of Plone 6 is Volto.
For clients for whom, because of the structure of their sites, moving to Volto is really not an option, there is not going to be much. I think there's a bit of a problem with the question here: we're saying moving to Plone 6 with Volto is a high-cost upgrade, but it's worthwhile because our clients would want it; and moving to Plone 6 and keeping classic — what's the benefit of that? Well, the migration from Plone 5 to Plone 6, keeping classic, should be relatively minor. Most of the upgrade cost comes with moving to Volto — Timo, I know, I think would agree. But we're going to be discussing that, and we'll try to figure out a way to make that work. Moving to Plone 6 with Volto, though — there's an obvious benefit to me, at least on the sales side of things. So yeah, if you want to sell it to your clients as "here's the cool new stuff you get", that's a Volto upgrade. And if you want to do the lower-cost thing and stick with the LTS version, then that's still available. Yeah. Well, that would be great — if it is a low-cost upgrade in that case, it would be great. Okay. Let's move on to the final question before we take audience questions, and I want the answers to be really brief — constrain yourselves. So, remember Alex Limi's vision for Deco. The question is: are we there yet? Was that fulfilled by Mosaic, by collective.cover, by Volto, or by Volto plus the great new stuff we've been seeing that EEA and Eau de Web have been doing? Or do we still need something else? I'd like to hear from everybody, and I want you to keep your answers really brief because I want to start taking audience questions. So I'm going to go in the order I see you in my Zoom window. Paul, you first. Not quite, but we're slowly getting there. And by which technology — Volto, you mean? For me as a user, it would be Volto complemented with the great stuff that we've seen from EEA. I want that power. I know others think I shouldn't be trusted with that power because I will make horrible sites, but I want it. Okay, Timo. When we look at what Volto core is right now, we're not there yet. But when we look at the page composition tools that Eau de Web has, that RedTurtle has and that kitconcept has, we went far beyond the wildest dreams that Alex ever had, I think — and we updated it. And it's a matter of getting a common vision on that and bringing together the different implementations, and then we're there. Okay, Eric, you next. Yeah, I think what we've seen out of Volto this week — the entire Volto stack — is closest to what Limi envisioned. I will point out that Limi also hated variables in CSS, so we can't go by everything that he promoted. Okay, Victor. Yeah, I also think that Alex's vision back in the day has to be updated to today's site requirements and today's trends. But I think we're very close to that. And what I will say about these power-user tools is: be careful with them, because yes, you give power to the user, but the user should also be aware of what that power entails — don't let users make ugly sites and then blame us. Power to the people, power to the people. So yeah, we're not far off. But I guess every site and every user has its own use case, and having this myriad of tools available to make layouts is nice, and it will get better. Okay. So just a really quick word from Philip on this question.
A hard yes — it's always been the heart of Plone to empower users, and Volto is getting much closer to that, and not only for the tinkerer but for the actual, real editor. I love tinkering under the hood, but let's face it, that's not for a normal user, and Volto definitely is for a normal user. So yeah, we're definitely getting there, and I'm really excited that Volto is finally delivering on these promises. Okay, great. Let's move on to audience questions here, and I'm going to start reading from what David has been feeding me in the Zoom chat — interrupt me, David, if there's something I should be doing other than that. The first question is from Eric: is there a plan for Volto to support editing the existing rich text fields of documents and news items? They already do. If you migrate your site — it's not TinyMCE, but it's a rich text editor in Volto, and you can migrate the whole site. You just don't have the block editor. That's different because the storage layer is different, and it would be, as we already discussed, hard to write an API that supports both. But all this data is there — and, reiterating what Alin said, schemas for the win. Schemas are the best thing that we have in Plone; they give us immense power and flexibility, and if you have structured content, you're good in Volto. That's fine. Okay, so I'm seeing a general thumbs-up on that from the other folks. Anything else brief to add? Okay, let's move on to the next question then. This one is from Rafael; I think it's a little less a question than a comment. Let's see: "I think Mosaic's approach to layout editing is much better than Volto's. The fact that you can drag one tile next to another and adjust the width is better. Having three implementations is bad and is what turns people off from using Plone. Migrations to newer versions become a project instead of a minor update." So, any comments on that comment? We've kind of talked about this, I think, but if anybody wants to specifically address it. Just quickly about migrations: yes, we need to work on making migrations even easier, and 5.1 to 5.2, if you include the migration from Archetypes to Dexterity and from Python 2 to Python 3, was a really, really tough one, because it actually had three migrations in one. But that's not going to happen again unless you migrate to Volto — then you actually have to either rewrite your views for Volto in React, or use the layout tools, where you can use all your metadata in your templates and then you won't have to write them yourself. So I'm excited about the chance this gives us to not have to develop browser views for everything. But, so, yeah. Yeah, just to build on that: 5.2 had a lot of backend upgrades that needed to be made to move from 5.1, and those were just unfortunately unavoidable if we wanted to survive as a project. But I'm fully bought into the idea of separating the front end and back end, having that API layer in between, because I think that definitely makes the migration story substantially easier. We're not having to reinvent the front end every release, because the front end is able to keep up with the technology. And I just think that's the best way to do that, and I think it's going to make life better for everybody.
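To make the storage difference mentioned above concrete, here is a rough, illustrative sketch — not something shown in the panel itself — of the two kinds of page the REST API hands back: a classic document keeps its whole body in one rich text field, while a Volto page keeps a mapping of blocks plus an ordering list. The field values and block IDs below are invented for illustration only.

```js
// Illustrative only — simplified shapes, not exact API responses.

// Classic Plone document: one rich text field holds the whole body,
// which is what TinyMCE edits.
const classicPage = {
  '@type': 'Document',
  title: 'About us',
  text: {
    'content-type': 'text/html',
    data: '<p>One big chunk of rich-text HTML…</p>',
  },
};

// Volto page: the content is split into individual blocks plus a
// separate layout field giving their order.
const voltoPage = {
  '@type': 'Document',
  title: 'About us',
  blocks: {
    'a1b2c3': { '@type': 'title' },
    'd4e5f6': { '@type': 'text' /* editor-specific rich text value */ },
  },
  blocks_layout: {
    items: ['a1b2c3', 'd4e5f6'],
  },
};
```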
Does anybody have any comments on the question — or the comment — that Mosaic has superior features because of its tiles and drag-to-adjust-width approach? Is that something that...? The newer block layout tools also provide that if you want it. So I think that's solved by now, or in the process of being solved. And there will always be different opinions on whether that is useful or not, but it's more than good enough for my outlandish needs, and it should be good enough for everybody. Does somebody want to talk about the roadmap for integrating those changes fully within Volto? What is the plan for that — is that going to become mainstream Volto, or is that going to be an add-on? That depends. I mean, the thing is that we have three different working implementations right now, as I said, and we compare them within the Volto team — we're working closely together, all the companies that do Volto these days. We have similar use cases and we found different solutions to the problem, and we compared the implementations. And it's clear that the EEA implementation, from Eau de Web, is the implementation that has the most features. At kitconcept and RedTurtle, we put more focus on the usability. So most likely users like Paul will be happy with the EEA implementation, because it has tons of features. We have a different idea for Volto — an idea that occurred to Victor, me and Rob when we implemented our editor and had problems with it, and then we had a look at Gutenberg. Gutenberg was published as a JavaScript package, so we thought: why don't we just take Gutenberg? It's just there, somebody else takes care of it, we'd have the most feature-rich editor in the universe and all these add-ons. We could just take it and put it on top of Plone, and then you'd have a WordPress with Gutenberg, but a secure one, because it's Plone. And at some point it occurred to us that we never want to do this, because I think the unique selling point of Plone has always been that we have a system that's really user-friendly for editors, that's really easy to use, that keeps the training costs low in large organizations. If you have to train thousands of users and you give them Gutenberg, you will have a really, really hard time. And if you want a consistent corporate design and you don't want to allow people to add a huge red banner with green text to your website, you have a really hard time with Gutenberg. And we always thought that this is not the USP of Plone. The USP of Plone is to provide a system that large organizations can use, that is secure, that allows you to have a consistent and professional corporate design, and that's really user-friendly, so your training costs are really low. So our vision for Volto, or for Plone, is not to have the most feature-rich CMS editor in the world, because others are way better at that than we are and always will be. This is not where we want to compete. You can always hide complexity from users; you can always disable features. And I think you need a pillow fight between these implementations — we can't live with three implementations in the long term. We need to find, to streamline, one solution that is extendable — and I love the idea of stuff being able to be disabled. We need to... The blocks engine — that's exactly what you need, and you build your own implementation on top of that. That's what we did. And all the blocks use the same system.
So we have this powerful system where you can build whatever you want and whatever you prefer, and this is already the case. We don't have three different implementations; we have three different flavors, you might say, but they all use the same basis. They're all the same, basically — it's just different blocks that we implement. So I think that's a really interesting conversation and a possible slight disagreement, I'm not sure, and I'd like to come back to that in a second when we get to our summing up, sort of what we are going to do for next steps. But before we go to the summing up, I would like to squeeze in one more question, which was posed by Fred. And we need to keep these answers really short so that we can get to our summing up. His question was: what's the future of using content types and behaviors in Plone to design an information architecture with Volto? The trend seems to be mixing and adding 40 different blocks for every page. Quick comments on that. David, do we have 90 seconds until the end of the...? I thought the session ended at 50 minutes after the hour — or does it actually end now? Sorry. No, we have a little bit of housekeeping related to a second group photo, so I'm kind of keeping my eye on that. Okay. So, anybody want to say something really quick about Fred's question? I mean, really quick. Yeah, so the blocks are definitely the new layer where you actually work, where you get the flexibility and all the power of Volto — that's definitely the case — though you still have all the power of the underlying systems like behaviors and content types and everything that you want. So it's essentially your choice on the site whether to go the structured content type direction or the... It sounds like the EEA stuff will help people who make that choice. I think we need to be able to have blocks that are fields from a schema or a behavior. I think this is unavoidable to get this done, because Plone is strong with structured content, and displaying structured content in blocks makes total sense to me. Okay, so... There are still technical issues. Let's just talk about the next steps then. There are certainly some things still needing some thinking and planning and so on. What are the next steps? Obviously there are going to be sprints, but specific to some of the questions we've raised in this conversation about the three different implementations and the migrations and the this and the that: sprints, open spaces, committees — how does the marketing team fit in? I'm on the marketing team, so I am particularly interested in how we communicate about it. Very quickly, do you have plans? Maybe Timo can speak to open space plans and sprint plans. Is there a need for some sort of team approach to working out some of these things? We plan to have an open space on the page composition stuff and on the Volto roadmap, and we will also sprint on Volto for sure. This weekend? Yeah, okay. Anybody else? Paul, do you have thoughts about how you'd like this to move forward? Paul is representing the user perspective — that's why I'm calling on him as a representative. Yeah, for me, it would be good if there is at least a longer-term vision on this.
I would rather have a power that a site administrator can lock down, so that it then becomes impossible to use anymore, than have to decide, as a client, as a customer, which one of the three implementations I need. I would much rather have something like: do you want small, medium or large, or do you want the simple or the expert view? That helps me a lot more. But that's just me. All right, just so you guys know, we're wrapping up. Okay, I don't want to cause stress. All right. Well, it sounds like the conversation will continue in the open spaces, with opportunities to contribute in the sprints. And yeah, that's the Plone community for you. So thanks, everyone. Over to you, David.
It may be a surprise to non-technical people to learn that pages created in Volto are not currently interoperable with traditional Plone's page editing. If you think about it, the reason becomes obvious. Volto, like Mosaic, creates tiled layouts, and like Mosaic it stores page data in special fields for the individual blocks and their layout. Neither Volto nor Mosaic pages are editable in TinyMCE, which expects just one rich text field. Is this divergence between sites created in Volto and sites created in traditional Plone a problem? It does make it harder to describe what Plone is, and it might mean that there is no way to mix both approaches - for instance when part of a larger site is available as a Volto-based sub-site. Would it be possible to have one tool and one representation for tiled layouts so that we can avoid this divergence? Is there some other solution? Is it even a problem? Will Plone 6 be backwards compatible with Plone 5 and include a smooth upgrade path? We will tackle these questions in this strategic panel discussion, moderated by Sally Kleinfeldt. Panelists include Paul Roeland, Philip Bauer, Timo Stollenwerk, Victor Fernandez de Alba, and Eric Steele.
10.5446/54786 (DOI)
over to you. Thank you, Kim. Okay, I'm sharing the screen. Okay, is it okay? It's starting. Yes, it's good. Okay, perfect. So hello to everyone. I'm happy to be here — missing hanging out with you, maybe, but I hope it will be a nice conference anyway. So: theming Volto without Semantic UI — is it possible? Well, the answer is yes, and we'll discuss the process of making that possible. I'm Nicola, I'm a frontend developer, as Kim already said, and I'm proud to be a member of the Volto team, following the development of Volto from the early beginning. First, I'll give you a bit of history, because RedTurtle has always worked with PA, with public administrations, and before 2017 each authority's website had its own design and theme. As you can see, every website was different in design and structure, and it was confusing for users to find content for the same services across different websites and platforms. So, back in 2017, the government started with AgID. AgID is the Italian agency for digital transformation, and its main goal is to create standards for public administration websites and services, publishing guidelines and common rules. These new guidelines, which are actually a design system, are made to unify the design of public administration websites. Working with public administrations, RedTurtle developed a Plone 5 Diazo-based theme following the design system, and we built a product on top of that, named Couté, which has been used for a lot of clients in the last few years. So this is the new design we standardized for our clients and public administrations all over the country. In 2019, a new version of these guidelines was published, and it changed the entire design drastically. So we had to implement a new theme, and we chose to adopt Volto. The AgID guidelines are made to unify any public administration website and define common rules for content types and the tree structure of content, and they consist of a design system that is general but also customizable for any website. So any local administration can have its own visual identity. So the theme we had to build is something reusable and extendable: each client has its own colors and logos, but the base is the same. AgID also provides a Bootstrap-based kit that we didn't adopt in the first version, because it was a mess, and we chose not to adopt it because we wanted to start from scratch. This time, we wanted to use it in order to enroll our theme in their listings, because the first time they rejected it. The kit was a mess again, but at least we had an official React library supporting it, to integrate those components into Volto. So, to summarize, this is the scenario: we wanted to adopt Volto in our fresh new project because it's cool, it's modern, and it has a good UX. At the same time, we needed to integrate Bootstrap for the official kit. And in Volto, we have Semantic UI as the CSS framework. If you ever try to mix two different CSS frameworks, you'll find out that they collide, with several conflicts in the styles, from the base ones to the specific ones. The first approach was trial and error. For the sake of simplicity, we tried to simply import our Bootstrap base kit into a Volto project based on Semantic UI. And we found a lot of issues — something like blood and pain, and more. Consider the container CSS class, used by both Semantic UI and Bootstrap.
You'll find plenty of conflicts between the two definitions fighting over the same selector. The two libraries implement containers in very different ways, and both of them apply to the same selector. So, as a result, we had the whole page broken. Subsequently, we tried to fix every single conflict, but we didn't get anywhere. It was an endless list, with nothing but a pain in the arse at the bottom. Then RedTurtle decided to invest a large amount of time to find a reasonable solution and develop it in Volto. The cheap approach didn't go anywhere, so we went for the big one. What did we develop in Volto? We structured a new theme named Pastanaga CMS UI, which is based on the Pastanaga theme but imports only the styles needed for Volto's administration UI. So we separated the Pastanaga styles: those needed for the toolbar, the sidebar, the control panels — in general for the editing interfaces — from those which are needed for the final user, for the public-facing views. The whole process has been discussed and analyzed a lot in the PR you see here, number 970, which is quite interesting if you want to read further. So now in Volto the body element and some critical components have a CSS class to classify each element: we have cms-ui for management interfaces and public-ui for the public-facing views. How can you use it? If you need something different from Semantic UI for your Volto project, you can now edit your theme.js file, stripping the import of Volto's Semantic UI library and importing the one from the Pastanaga CMS UI theme, which includes only the CSS styles for management interfaces, wrapped by the cms-ui class. And then import your site's custom styles as always. To avoid the same issues I had working on containers, I suggest you choose the Pastanaga CMS UI theme for the container styles in your theme configuration. If you have or want to use Sass in your project — for example, if you're using Bootstrap, like in my case — you would include the Sass plugin for Razzle, which delivers the Sass loader for Webpack, and it basically works like magic. And then in the Razzle configuration you can set the Sass options as you need. Another suggestion I can strongly give you is to normalize base styles to resolve possible conflicts on the base styles of your page, like paddings and margins; the base font size and weight are the main issues, but you can have other ones. So wrap your components with public-ui, for example for block views. That way, while you're in editing views you have the cms-ui class applied to the body element, but your blocks or your custom views from Bootstrap have the font size normalized. But why are we here? Let's go back to the original problem, which was the cool new AgID theme. All of this started in order to make that theme for Volto, and after several months of work — it was like a year of time span — it was possible to include a Bootstrap-based theme in Volto, and we started creating our AgID-based theme, and this is the result. This is a product named io-comune, and to make it a product we needed a base common package, because, as I said, it will be the base for any client, any project we will make in the future for public administrations. So, a base common package to reuse for any installation, for every customer — and that is design-volto-theme. You can find it on GitHub, and you can look at the configuration of the Sass loader or at the custom import of styles excluding Semantic UI.
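As a rough sketch of the kind of change described here — not the exact contents of the real repository, and with the import paths marked as assumptions — the theme.js swap and the public-ui wrapper might look roughly like this:

```js
// theme.js — illustrative sketch; the real paths in the package may differ.

// Instead of the full Semantic UI build that Volto pulls in by default:
// import 'semantic-ui-less/semantic.less';

// ...import only the Pastanaga CMS UI styles (toolbar, sidebar, control
// panels), which are scoped under the `.cms-ui` class:
import '@plone/volto/../theme/themes/pastanaga-cms-ui/cms-ui.less'; // hypothetical path

// ...then your own Bootstrap/Sass-based site styles:
import './theme/site.scss'; // hypothetical path
```

And a custom block view can opt back into the public styling while editing by wrapping itself in the public-ui class, which is the normalization trick mentioned above:

```jsx
import React from 'react';

// Wrapping a block view in `.public-ui` so Bootstrap's normalized base
// styles (font size, margins) apply even inside the `.cms-ui` editing shell.
const MyBlockView = () => (
  <div className="public-ui">
    <div className="card">{/* Bootstrap-styled markup goes here */}</div>
  </div>
);

export default MyBlockView;
```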
But we also needed the package to be an intermediate layer between Volto and our site projects, because Semantic UI is thought of as a layered system where you have levels, from the package theme — which in our case is Pastanaga — up to, usually, the site-specific theme. In our case, we also have an intermediate layer, which is design-volto-theme. This works basically logically, because we also have our site projects, and they simply import the styles from design-volto-theme and use them to build the new theme. In the same way, another layer for Razzle has to be set up, for aliases and customizations. So we customized the Razzle configuration to add an alias for design-volto-theme, named italia, and extended the customizations path so that components from the Italia theme can be customized from the site projects. In a way, we replicated in our intermediate theme what Volto does with Razzle, theme layers and all this stuff. Eventually, we created a template for our actual site projects, which is made of a design-volto-kit — our project template — and a Yeoman generator like generator-volto, and it is named CreateItaliaVoltoApp; it automatically creates an actual site project from our project template, where we only need to customize the variables. In design-volto-theme we have this machinery of variables, so we only need to change a few of them and we have a fresh new project. And that's it. Here it is: io-comune, our product for public administrations. Here you can see Volto's interface for blocks and for editing the content. The toolbar is made with the styles from Semantic UI, from Pastanaga, and that's the only exception in our Pastanaga CMS UI theme, where we had to reimplement the styles because it was a little complicated. Everything else is made with the public-ui styles; blocks are in the cms-ui part, because they have the controls for moving or deleting and the menu for adding a new block, but we also have the public-ui wrapper inside them, so the blocks have the styles from the Bootstrap-based kit applied. Here are some examples of projects which are going to go live in the next months, like Comune di Modena, Comune di Reggio Emilia and others. Thank you for joining. I hope I answered the question, and I'm here for any further questions. Thank you, Nicola. I guess you proved me wrong — it is possible. I really appreciate you showing us how to do this, because it's always good to have alternatives and to know that it's doable without Semantic UI. I'm sure others will take advantage of what you've built, and thank you for sharing your code too. I see we have one question in the Slido, which is: Victor showed us a sneak peek of the next version of the Pastanaga design system — do you think it will be too much effort to bring these updates in? Well, I guess it will have to be discussed, also because one next step I have in mind is to have the styles needed for the administration UI in a separate bundle, so they are imported statically and we have a bundle of styles for management interfaces separated from the styles of the new theme. Maybe a solution could be CSS-in-JS or something different from the basic styling. We'll see. Okay, and we have another question, which is: Bootstrap 5 is now in beta — do you think it will be much work to move to this new version, assuming you have been using Bootstrap 4?
I don't know how they are implementing Bootstrap 5, but I'm using the React kit implementation for Bootstrap, and I hope they will upgrade those components and provide an upgrade guide so we can switch from Bootstrap 4 to Bootstrap 5. I don't know if that will be the case for our theme, for our AgID Bootstrap kit, but I guess it will be nothing different from everything I explained: separate it, import it and use it in your project. Okay, I don't see any other questions in the Slido, but I would encourage everyone watching to please join Nicola in the face-to-face — which I hope you know by now is the blue button below the video here — and we will be continuing the discussion there in the Jitsi. So thank you again, Nicola, thank you for showing us that this is indeed possible, and I hope to see you later today and tomorrow. See you.
We will walk through the process of building a product for Italian Public Administrations using a bootstrap-based theme. I'm presenting io-comune, RedTurtle's first product based on Volto, and the strategies we used. We will see the possibilities in Volto for theming without SemanticUI, using Bootstrap and Sass, and what the next ideas we could work on are.
10.5446/54787 (DOI)
I'll start right in. Pleiades is a community-built gazetteer and graph of ancient places. It was built using the Plone content management system, which is why we're here. It publishes authoritative information about ancient places and spaces, providing services for finding, displaying, and reusing that information under open license. And we publish not just for individual human users, but also for search engines and for the widening array of computational research and visualization tools that support humanities teaching and research. Pleiades was incubated during the early years of this century in reaction to two related developments. First, the year 2000 saw the completion of a 12-year, multi-million-dollar project to compile and publish a comprehensive atlas of the Greek and Roman world — this thing right here. It's from that classical atlas project that Pleiades takes its name: in Greek mythology, the Pleiades are the daughters of the Titan Atlas. It's hard to believe, but this project, which was chartered by the North American Society for Classical Studies and funded through a combination of private and public funds, was the first project to successfully bring such an atlas to print since 1874. That time lag provides a critical insight into why academic classicists took on such a challenge: not only had a century of literary and historical research gone by, but the entire modern discipline of archaeology had been born, had matured, and had brought massive change, both factual and theoretical. And this insight dovetails with the second reason we dreamed of Pleiades. The period of the atlas's creation coincided, to a significant degree, with the maturation of desktop geographic information systems and the dawn of the geographic web. The project was really a hybrid, because it started before using fully digital tools was really in prospect. But as we went along, the professional cartographic shop that supported the atlas project brought more and more GIS into the production process, though it was still, in the end, a cartographic art project that produced a print item. Bringing Pleiades to life for the next generation took a bit longer. We wanted to bring all the data that had been assembled for the atlas into the modern world, if you will. And our first attempt to obtain funds was not successful. We were calling the thing an internet archive for ancient geography. The major complaint we ran into with one of the proposal reviewers was that we had not pre-selected whatever combination of workflow engine, content management system, wiki system, web framework or databases we were going to use to build the prototype. Keep in mind this was 2004, so Wikipedia was a relatively recent development; I think Google Earth was about to hit but had not actually sprung on the scene yet. That was kind of the lay of the land. We had been hoping to use grant funds to hire a developer to work with the project team on making our technical stack selection, but in light of getting shot down in the review process, we decided to work in our spare time to do some initial evaluation before submitting a revised proposal in 2005. It's then that we chose Zope and Plone. Why, you ask? I confess that one key reason was: one zip file to download, a one-click-install story — we were up and running with a toy environment in mere minutes. We didn't have to prove our Java wizardhood.
We didn't have to present any credentials, we weren't in any kind of dependency management purgatory — at least we weren't aware that we were — and we didn't have operating system incompatibilities. Plone just worked. Out of the box it checked three boxes in particular: customizable workflow rules, customizable content items, and a template framework with skin abstraction layers for the UI — all that sort of stuff. I'd be remiss if I didn't mention here that we had some helpful advice during this period from a guy named Chris Cowley, who I think is attending the conference this year, so shout out to him. And so we were launched. Our first grant, provided by the US National Endowment for the Humanities, allowed us to hire Sean Gillies as full-time project developer. Sean was to stay with Pleiades in the capacity of chief engineer until 2013, helping us with a series of project milestones, including two subsequent NEH grants, the transfer of Pleiades headquarters to New York University in 2008 — it had been at the University of North Carolina at Chapel Hill before that — and, crucially, our transition to full production status in 2010. So we're not only grateful to Sean for his role in these core project achievements, but we're proud that NEH and ISAW funds were able to support his early work on the Python Shapely package for manipulation and analysis of geometric objects in the Cartesian plane and on the GeoJSON format specification, which has subsequently matured into an internet RFC. In the seven years since Sean left NYU to work for Mapbox.com, we've come to rely on the development team at Jazkarta, Inc. for Plone maintenance, upgrades, customization, and cleaning up the messes that I make when I fiddle with code. Our most recent NEH grant, which began in 2016 and ended a little over a year ago, allowed us to address a number of growing pains that had built up over time and turned into major impediments. The biggest change came early in that process, early in 2016, when Jazkarta helped us upgrade from Plone 3 to Plone 4.3 and re-host to a more capable server managed by our longtime hosting provider, tummy.com. Jazkarta took this opportunity to systematize our deployments using Ansible and to install New Relic monitoring to identify and address performance bottlenecks. All this was a non-trivial exercise — I think Alec may be forced, by my having mentioned it, to explain that in a few minutes — and one of the main reasons, which I think he'll touch on, is that we were carrying such a large amount of customization in our software stack. That whole process was worth it, though, because we very quickly saw a year-over-year improvement in average page load times of nearly 75%, down from a painfully long average of 12 seconds to about three seconds. Now, the standard deviation on both of those is pretty wide, given our global user base and some of our folks being in places where they have metered connections and slow connect times, but it was still a really big improvement, and most of our users see very fast performance. Subsequent work has improved things even further, such that I don't think we've had a user complaint about performance in nearly three years, even though we've had some big increases in content complexity and so on. So what I'd like to do at this point is switch over and do a demo. So we're going to grab the browser. Whoops. So this is the Pleiades homepage. What's in Pleiades? Let's get some orientation here. 
Since we call ourselves a digital gazetteer of ancient places, I think the easiest thing to do is to pick a well-known place and start there. How about the city of Rome? Looks like Pleiades knows about four places that begin with that string R-O-M-E, and the first one is the one we're looking for. So here we're looking at the Pleiades entry for the ancient settlement of Rome. I'll scroll down so you can see the map and confirm that it's where it's supposed to be. This is an HTML view of an instance of a custom Plone content type that we call a place resource. Our content types are presently still built on Archetypes, and you'll recognize at the top of this view some of the standard Dublin Core fields like title, creators, contributors, a rights statement, last modified date, and summary. I'm going to blow this up just a little bit for folks with small devices. In Pleiades, our place resources are the primary organizational construct of the whole digital gazetteer. They're conceptual entities, so we apply the term place to any locus of human attention, material or intellectual, in a real-world geographic context. So a settlement that's mentioned in an ancient text is a place, whether or not it can be located now. An archaeological site is a place. A modern city located on top of an ancient settlement is a place. If you and I both had access to time machines and we could find a way to agree on somewhere to meet for lunch in antiquity, that would be a place for Pleiades purposes. So basically any spatial feature that's connected to the pre-modern past and that a human being has noticed and discussed as such, sometime between the past and the present — that's a place. If we look at what's here in our view, this canonical URI business simply repeats the stable uniform resource identifier that Pleiades automatically assigned when the content item was created. That's there in your browser location bar. We repeat it in the view and provide a handy JavaScript copy-to-clipboard affordance in the UI, because the assignment, stability, and reuse of these URIs constitute one of the most important functions of the gazetteer. Especially in the early days of the project, most humanities scholars were unaware of and suspicious of the web's long-term reliability for scholarly publication purposes. Bit rot — or, you know, link rot — was a huge problem from an academic perspective, because we want to cite everything; we want to be able to find out where assertions about the things we study came from. So we provide a unique, stable identifier for places in the ancient world that other scholars and students want to discuss or reference in their own publications and datasets. And as a result, over the last 10 years, these URIs have come to be used in a variety of other web applications and research datasets. So that makes it easier for scholars and students to combine multiple datasets that they gather from different places, different repositories, and use them for research tasks without having to clean or align all the geography by hand. So that's one of the important things we do. This representative point here, this latitude-longitude value, is calculated automatically. It's usually a centroid of whatever spatial geometries are associated with the place. We'll talk more about that in a minute. First, I want to look at a slide and talk a little more about our custom content — not Sean and Scotty, but custom content types — in Pleiades. 
So the place resources, like the Rome entry we've been looking at, are folder-ish, as the Plone folks say. That means they can contain other content. In Pleiades, we allow only three other custom content types inside places. We call these locations, names, and connections. There you get recent counts of each. I think if we move to another famous ancient city, we can have a good look at those. So let's go here. So welcome to the ancient city of Nineveh. I'll let you watch the map. It's located near the modern city of Mosul in Iraq. In this place resource, you'll see we've defined two location resources right here. We use locations to store the geospatial information about where an ancient place may have been located on the Earth's surface. Both of these are depicted on the map using the color blue. One of these is a simple point geometry imported from another database or dataset called DARE, which stands for the Digital Atlas of the Roman Empire. It's the work of Johan Åhlfeldt at the University of Gothenburg in Sweden. The second location is a polygon. It's derived from OpenStreetMap — from a way or a relation in OpenStreetMap — and corresponds in this case to the boundaries of the modern archaeological site. So let's look at our location here. If you take a peek at it, you can see we store more than just the spatial geometry, although that's really obvious in the case of a polygon. We can subtype our locations. Here we call it a settlement, because that's what we're outlining. We might have called this an archaeological site instead, but in any case, that's the current state of the data. We can indicate the nature and extent of archaeological remains associated with the location. This is of value to cultural heritage organizations and NGOs that respond to disasters, organizations that are engaged in the protection of cultural property during conflict, that sort of thing — knowing whether the thing is visible or not, and whether there's something substantive on the surface, is very important. We can also signal our confidence in associating this particular location with the place and any of the other information, like names and connections, that the place contains. Another crucial bit of information that we store is a citation. In this case, it's an annotated link to the OpenStreetMap way in question, the one that we imported to create this location. Keeping this information does two things for us. Firstly, it allows us to credit OpenStreetMap in compliance with their data license, and it also makes it possible for users to inspect the original data source in order to evaluate for themselves its quality and relevance. So the place resource for Nineveh also contains a number of name resources. In Pleiades, name resources store information about toponyms, both ancient and modern, as they're associated with a given place. Pleiades names are more than labels, and therefore they are a more complex content type than a simple string or a series of language-encoded strings. So if we look, for example, at the Pleiades resource for the ancient Greek name of Nineveh, Ninos, you can see that we record one or more romanization forms. Romanization forms are transcriptions in Roman characters of the ancient name. For some languages and writing systems, like polytonic ancient Greek, which is what we have here, and modern or historical Arabic, multiple romanization forms can and probably should be used for the same original name. 
We want them to be used because we want our name-based search to be as easy and effective as possible — regardless of which variant of a name you're coming from, we want you to be able to find it. So we're encouraging our content contributors to provide as many romanization forms as are commonly in use or that one finds in earlier scholarly literature. When the original writing system is fully and accurately represented in Unicode, we fill out this attested name field, and then additional attributes are available to indicate the language and writing system of the original, to indicate the function of the name in the original context, and to characterize the accuracy and completeness of the text or texts that have communicated it to us. The next thing we encounter in the HTML place view are two clickable lists of what we call connections. We use connection resources to express direct place-to-place relationships. They enable us to document geographic, political, and analytic hierarchies, networks, and linkages. Although the basic functionality for connections was introduced into Pleiades years ago by Sean Gillies, connections in their full current form are relatively recent implementations. So Alec Mitchell and his colleagues at Jazkarta bear the scars of turning these into the full-fledged, awesome things we have today. So connections as we have them now turn Pleiades into a directed graph. Our place resources are nodes in the graph, and the connections are the edges. Consequently, any place can participate in an ordered pair as either the origin — or you can call it the subject if you like RDF parlance — or as the target, the object. And that's why you see two lists of connections in the place view. The "makes connections with" list shows connections for which Nineveh is the origin — so Nineveh is a part of whatever. The "receives connections from" list shows connections in which Nineveh is the target — so the Honduru gate is part of the topographic area of Nineveh. Let's look at one of these. When you examine a connection resource in Pleiades, like this one for the administrative relationship between Nineveh and the ancient kingdom of Assyria, it becomes immediately apparent that our graph's edges are not simple. They're attributed, just like the nodes — the places — are. Apart from the connection type, our contributors can also indicate scholarly confidence in making the connection, the time periods during which the connection is thought to have been active or functional, and any references to scholarly literature, databases, or websites that provide evidence of it or information about that connection. These three attributes — association certainty, temporal attestation, and the references down here — are provided not just for connections, but also for our names and locations. So you may have noticed them in the HTML views we looked at earlier, even though I didn't mention them. These two temporal constraint attributes, not-before and not-after, are a recent addition to the data model, and they apply to connections only. They were requested repeatedly and loudly by our users so that events and durations for which there's good historical data can be more precisely delineated. So although these attributes are not extensively used on this particular connection, you can see them hard at work elsewhere in our dataset.
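To make the idea of these attributed edges a bit more concrete, here is a rough sketch of one such connection as a plain data object. The field names are illustrative — they follow the attributes just described rather than the exact keys of the real Pleiades serialization — and the place identifiers are placeholders.

```js
// Illustrative only: an "edge" in the Pleiades graph, attributed like the nodes.
// Field names follow the attributes described above, not necessarily the exact
// keys used in the real Pleiades serializations; the place ids are placeholders.
const connection = {
  origin: 'https://pleiades.stoa.org/places/<nineveh-id>', // subject
  target: 'https://pleiades.stoa.org/places/<assyria-id>', // object
  connectionType: 'administrative relationship',
  associationCertainty: 'certain', // scholarly confidence in the link
  attestations: [
    { timePeriod: 'neo-assyrian', confidence: 'confident' },
  ],
  notBefore: null, // the two temporal constraint attributes,
  notAfter: null,  // used on connections only
  references: [
    {
      shortTitle: 'Some scholarly work', // placeholder citation
      accessURI: 'https://example.org/evidence-for-the-connection',
    },
  ],
};

console.log(`${connection.origin} -> ${connection.target} (${connection.connectionType})`);
```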
References are not limited to locations, names, and connections; we use them extensively at the place level as well. Nineveh boasts a substantial number of them, more than average in the dataset. Our goal with references is to connect users as quickly as possible with additional information about the resource in question — whether that's because the item was used in developing what we're showing you in the place resource, or because it provides additional information that might be useful to you, or more data of some kind. So we link to things like photograph collections and museum sites and that sort of thing. We focus on getting users to the resource in question as fast as possible. So if the thing is available online, we're going to link right to it — in this case to a site called ToposText that was created by a guy named Brady Kiesling. It has cool stuff like lots of ancient source citations in which a particular place is mentioned. But if the item is not online, but it can be found separately cataloged in libraries, we link to the corresponding entry in something called WorldCat, which is an online union catalog of academic libraries around the world. It's maintained by the nonprofit Online Computer Library Center, or OCLC for short. And by linking to WorldCat, we make it easier for users to identify the nearest library that holds a copy of the work that they're looking for (that apparently is not a valid zip code in the US, but never mind). For journal articles and other works that are not cataloged by third parties, we link to our own bibliographic database. We use the open source Zotero citation management system for creating and managing all of the bibliographic data that we use in our references, but we only link directly to the Zotero record when we've really got no place better to send people. Zotero is run by the nonprofit Corporation for Digital Scholarship; it has a robust and reliable API that we use at author time, and we use it to relieve our contributors from tedious 20th-century-style bibliographic formatting tasks. And if I don't run out of time — I think I'm doing okay — we'll see it in action in a minute. You'll notice above the reference listing that we have a place type attribute. We've been looking at settlements so far, but Pleiades gives us the ability to describe many more types of ancient places and spaces. So let's see that in action real quick. All of the place types in our controlled vocabulary are listed in our advanced search on the categories tab. You can see we've got a bunch of them. Let's look at lighthouses. We can use that to quickly find things. And I want to limit our results to places. I don't think I'm logged in, so we'll only get published items regardless of what we ask for. So here are eight lighthouses that are indicated as such. This one is the famous lighthouse of ancient Alexandria on the Egyptian coast. It's one of the original Seven Wonders of the ancient world. This lighthouse was super famous in antiquity; in fact, it was simply known as the lighthouse — Pharos in ancient Greek. So let's return to the advanced search so I can demonstrate another key aspect of the data contained in Pleiades. Go back to the categorization tab. There are a lot of places in the ancient world that are mentioned in historical texts that we can't locate today with any kind of precision. Pleiades records these as well, and we normally add a place type value of unlocated to the corresponding resources. And that lets us hunt for them in the advanced search. You'll see we have just a few unlocated places. 
Let's narrow it down to unlocated places that have been added recently. Here are a few. The funerary grove of Epicrates sounds interesting. Right. So, typical of unlocated places in Pleiades, you'll see that no locations have been recorded. We do, however, have some connections that are used to provide associations with other geographic features known to be associated with this one on the basis of ancient sources. Nacroson is interesting because it's not precisely located either, but scholars have theories about where it was. In fact, you can find three competing identifications in the scholarly literature. So these are archaeological sites that are in the right general area, but there's no definitive evidence to say which one was this place named Nacroson that shows up in some ancient texts. So you'll see that we have three discrete locations associated with this place resource, and each of them makes use of the association certainty attribute to indicate the tentative nature of the association. And if you look at the map, you'll see each one of these is represented with one of our blue crosshair icons, which is the point location. So now I'm going to talk briefly about this representative point in more detail. It's indicated on the map with an orange circle icon. We've already noted its presence in textual form in the attributes listing. In Pleiades, representative points are calculated as an approximate centroid of the locations associated with the place resource in question. Those of you who are familiar with the Python Shapely library will recognize the terminology representative point — it's not always a centroid, but it often is. So in cases like this, the representative point will fall somewhere between the various location geometries. In simpler cases like Rome, Nineveh, and the Pharos, it will fall near the center of the locations. We've also discovered that there are times when we know where a feature is, but we don't want to use a location for it. What do we do about the representative point then? And why would we want to leave locations out in the first place? Well, big, complex, broad features are a good case study. Let's consider Hadrian's Wall. There it is. So what's going on in the map here? We're visualizing two things for this huge, complex fortification system: the spatial coordinates of each of the connected places that together constitute the fortification system — these are the green bow tie icons that are all stuffed in here — and, this time, the orange circle denoting the wall's representative point, which is calculated on the basis of those geometries, since we don't have any location objects defined on the wall place resource itself. We're starting to use this technique with kingdoms, provinces, and other administrative and regional divisions as well. So here, for example, is the island of Sicily. All right. It was also a Roman province. And it's here that the GIS folks are usually going to say, hey, why don't you just do a spatial containment or an intersection query instead of messing around with all these connections? It seems like a ridiculous amount of work. And I'll give you two interrelated reasons. We're not just making a map or constructing a geospatial dataset for spatial analysis and graphic visualization. We're creating a dataset that could be used for lots of tasks in addition to those. We want a data model in which every place known to be in a province, let's say, is properly associated with that province. 
A pure spatial query isn't going to find the unlocated places. Moreover, ancient regions and administrative divisions are and were sparse in spatial terms. We may know lots of settlements in a particular area of the Earth's surface, but we may only know about the belongingness of some of them to a particular historical region or administrative division. And moreover, we'd be making a huge error by assuming that ancient administrative regions were solid polygons in the first place. Borders could be fluid or ill-defined in real fact, and the internals could look like Swiss cheese even if the borders were precisely defined and you could map them accurately, which we usually can't. Let's consider, for example, Delphi. This is a famous ancient sanctuary of Apollo in Greece. Delphi wasn't just a bunch of temples. It controlled a significant land area and therefore had an important economic role in the region, above and beyond its religious and diplomatic functions. A naive observer would assume that during the Roman imperial period this site would have fallen within the Roman province of Achaea. But legal documents demonstrate that Delphi in fact had a direct bilateral relationship with the emperors, and so it bypassed the provincial administration completely. The Roman governor of Achaea could say nothing, legally, about what went on in Delphi or involving Delphi's neighbors, even though he had authority over the neighboring communities. Delphi gives us a chance to talk briefly about another important aspect of Pleiades' places. We call them alternate serializations, or alternate representations. So far, I've been showing you the HTML versions of the content in the Pleiades Zope database. Those of you familiar with how Plone 3 and Plone 4 work will recognize the term base view: when you look at Pleiades places, names, locations, and connections in a web browser, you're looking at rendered versions of a customized Plone HTML base view template. Those of you who program Plone will know that it's possible to create alternative views for content items, and that's what we've done. You'll see the links on every place page under the heading alternate representations. Clicking on any of these links — I'm going to click on JSON — will get you the corresponding format, complete with the appropriate MIME type in the response header so that your browser can try to figure out what to do with it. Here, for example, is the JavaScript Object Notation serialization of the Pleiades place resource for Delphi. I'm not going to click on it now for time reasons, but you can click on the KML link to get a Keyhole Markup Language serialization, and if you've got Google Earth installed, and if your browser knows about it, and if you don't have an old version of Adobe Photoshop that hijacks that particular MIME type, you can view our location data overlaid in Google Earth. We've exposed these alternate representations in such a way that they constitute a read-only, open application programming interface that conforms to the web architectural style known as REST — that is, representational state transfer. We use a uniform method of constructing the corresponding URIs across the whole site, and you see that summarized here. You can discover all of these resources — I'm in Chrome, aren't I? Different set of keystrokes — so you can discover these things, whoops, on the web by parsing, out of the HTML, the link elements that have the alternate value in the rel attribute. 
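As a rough sketch of what consuming this read-only API might look like from a script — the place ID is a placeholder, and the response handling is kept minimal — one can request the JSON serialization directly via the uniform URI construction, or discover the alternates from those link elements:

```js
// Sketch of consuming the read-only API (Node 18+ for the global fetch).
// PLACE_ID is a placeholder; substitute a real Pleiades place identifier.
const PLACE_ID = '<some-place-id>';
const placeURI = `https://pleiades.stoa.org/places/${PLACE_ID}`;

// Uniform URI construction: the JSON serialization lives at <place-uri>/json.
async function fetchPlaceJSON() {
  const response = await fetch(`${placeURI}/json`, {
    headers: { Accept: 'application/json' },
  });
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  return response.json();
}

// Alternate representations are also advertised as <link rel="alternate"> tags
// in the HTML view; a simple pattern match is enough for a sketch.
async function discoverAlternates() {
  const html = await (await fetch(placeURI)).text();
  return [...html.matchAll(/<link[^>]+rel="alternate"[^>]*>/g)].map((m) => m[0]);
}

fetchPlaceJSON().then((place) => console.log(place));
discoverAlternates().then((links) => console.log(links));
```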
You can also use HTTP content negotiation to get at them in Pleiades. And the whole idea here is to make Pleiades an easy resource to reuse from other web applications. We also publish RDF versions of the content, and that makes Pleiades a linked open data application. We're proud of the fact that Pleiades was the first ancient history website or web application to meet the criteria for addition to the linked open data cloud diagram, where you'll still find it today. But if you don't want to script a web app against Pleiades or, you know, set up a triple store and do all that sort of stuff, and instead you'd like to download all our data to do things with on your own, in your own applications or tools, we try to make that easy too. So we have a downloads page that you can get to, and there we provide access to regularly refreshed bulk exports of the published data in several formats. Our JSON exports are the comprehensive ones, so they provide every attribute of every published place, name, location, and connection. The other formats — the comma-separated values, KML, and so on — are abridged: every published object is represented, but only some of the commonly used attributes are provided in each. You can also get packaged copies of our data from various third-party repositories; I've listed some of those here. And there are also a couple of really cool derivative datasets that are built and maintained by our partners. So how does all this information get created and updated? Well, we have a Plone workflow for that, and I'm going to show you that very briefly. It looks like I've taken a little more time than expected, and Alec is probably getting itchy to do some talking here, so let me just do a real quick demo. I'm not quite sure how the time got away from me. Oh, I see what's happening — I'm actually doing fine on time; I was looking at the Zoom time instead of the clock time. I apologize for all the noise. All right. So authenticated users can add content, and we do that in a Plone-like way. I'm going to use a username that I have assigned that doesn't have any special privileges. And I can navigate to the places top folder, and I can add a new place in the way you'd expect to in Plone 4. And you get a form that lets you add stuff in. So let's say we were going to add Atlantis. Then you fill out the form. It's unlocated. We can have multiple types, so I can call it an island, a settlement, and so on and so forth. I won't fill it all out. References — I'll show you how those work in a minute. We can add tags, I could credit or blame other people, and I even have a details rich text field in which I can write a long essay about how awesome Atlantis was. Once I save that, we're going to start in our default workflow state, and that's drafting. So it is visible only to me and to super users, and I can continue to edit this. In particular, I can add locations. We have a form that lets you put in an OpenStreetMap ID and import the geometry from there, which is very handy for features that are extant and visible in the world today. We can also add names in the typical Plone manner, by entering stuff and using combo boxes for our various vocabularies as we encounter them. So, for example, we're not going to let people randomly type in the name of the language; we're going to control that so that we don't get typos and that sort of stuff. So that's the basic process for adding something. 
Once you are ready to have it reviewed by our editorial college, you submit it for review using the typical Plone workflow transition thing. We encourage people to write useful comments when they submit a revision, although in this case I think I'm not going to bother. I submit it for review. Once I've done that — if I'm a regular user, as I am logged in now — you'll notice that the edit option on this thing is gone. Upon submission we lock the content item against additional changes by the user who submitted it. That keeps us from having multiple edits involving the editors and the user at the same time. If I were to log in as an editor, I could look at this and revise it, and in this particular case send it back to myself as deficient in terms of its content. But that's the basic way we work with new items. And so we allow people to come along and add whole new places for consideration, or they can visit an existing place and add a name, a location, or a connection using the same kind of button affordances, and then those get managed through the workflow process separately. Now, modifications to existing resources are a different animal. We make heavy use of plone.app.iterate, and I'm going to find a particular example, go to another browser, and log in as an editor. I was already logged in, but I've logged out — nifty me. Okay. So I scouted out this particular place in advance. This is a place called Gigli, located on the North African coast. You'll see here under the names listing — we blow that up for small devices — we have a Latin form of the name, and then, picked out in orange here, we have a Greek form of the name. That means it has just been added. I'm showing you now the editor's experience of names that have just been added, and I can go through this and approve it for publication. I promised you plone.app.iterate, but in fact I got lost in my notes. Here's plone.app.iterate at work in Pleiades. Andesina is an entry in Pleiades — and I'm going to go back to my other user, because I don't want you to be forced to see the editorial view of all this. It's still got some raw data in it from when it was imported from the original Barrington Atlas data. It's had some modifications made since then, but we'd like to improve it. The way I do this as a user: I don't get to edit it live — this thing is locked for regular users. What I have to do is use iterate to check out a working copy. I've checked out a working copy of the place — you'll notice the "copy of" in the URL stub. Then I can edit this in draft to make the changes I want. I can modify any component of the presentation, I can iterate on that, and then save it in my draft working copy. Once I've got that the way I want it, I can submit it for review just the way I would something entirely new. Back in the editor's view, suppose I'm doing my daily editing task and I want to see what's waiting. I hope this is not an embarrassingly large number of items — no, it's not. Here's that Andesina entry that my alter ego submitted for review. I can look in the history to see what was said about it, and then I can move it on through the workflow process. What I'd normally do, if I think this is good to go, is check it in. We bring forward the check-in message from whatever the workflow transition message was. Thanks to that editorial action, that summary is live in the database and it replaces the published version. 
The history gives you the ability to see what was done previously to this particular entry, all the way back to its original ingestion into the database in 2009. I think we are now at the point where I should turn things over to Alec and drive the slides for him, and we'll see how we go. I'm going to go really quickly here because we don't have a ton of time. Let's see if we can get the slides up again, Tom. I'm not seeing them. There we go. As you probably understand from Tom's demo, Pleiades is a pretty complex system. In fact, it comprises 25 custom Python packages. That complexity can make upgrades really tricky, especially for a development team that came in unfamiliar with the original code base. When we did the upgrade, we ran into various issues. We made some testing fixes, had to update a bunch of custom views and customizations of existing views, removed some dependencies on old packages, and removed various cruft from add-ons that had come and gone over time. Then we made a variety of performance improvements. If you saw Philip Bauer's talk yesterday, you'll probably recognize a number of the sort of performance gotchas that we were dealing with — in particular things like missing nocall: statements on path expressions, which cause objects to be fully rendered while generating a template. There was a bunch of inline-rendered JSON in the head: Tom showed you those links to data in the head, and a lot of those used to be full JSON; in some cases it got very complex and was very hard to render. We moved that stuff to async loading, and that gave pretty large performance improvements. Then, over time, we've also replaced ATVocabularyManager with plone.app.registry-based vocabularies, which means fewer catalog queries, fewer object lookups, and some significant performance improvements. Then, as Tom mentioned, we implemented Ansible to ensure we had repeatable deployments and that the server wasn't a special snowflake, which is a term we might recognize from Calvin's AWS talk yesterday. Plone plays a big role in Pleiades, despite the fact that it's a very complex system that doesn't necessarily fit into traditional content management. You can see from Tom's demo that it uses tons of features from Plone: we've got content types with complex custom schemas, hierarchical content models, custom permissions and roles, custom workflows, and then a pretty heavy dependence on plone.app.iterate for content staging. The fact that iterate is based on adapters makes it really easy to customize the data copying and merging logic, which Pleiades does extensively. Then, on top of that, we use custom views for the various non-HTML representations for exports, and we depend on a variety of Python libraries for doing that, some of which, as Tom mentioned, came directly out of this project. In terms of recent work that Jazkarta has been doing on Pleiades, we've been preparing for a while for a Plone 5 upgrade. We've made improvements to the bulk import process, we've completely eliminated ATVocabularyManager, and we've improved the iterate customizations so that the checkout/check-in process works pretty seamlessly. We've made completely comprehensive JSON representations, improved this concept of promoting locations to places — which is something the demo didn't quite get into, but it's another workflow-y action that allows you to actually convert subordinate content types into places — and then a bunch of other stuff, including better mapping systems and updated geographic information. 
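As an aside on the inline-JSON-in-head change mentioned above — this is a simplified sketch of the general technique, not the actual Pleiades code — the idea is to stop embedding the full serialization in every rendered page and instead fetch it lazily when a script actually needs it:

```js
// Simplified sketch of the "move inline JSON out of <head>" technique.
// The template renders only a <link rel="alternate" type="application/json">
// tag, and client code fetches the serialization on demand instead of having
// the server render it into every page.
async function loadPlaceData() {
  const link = document.querySelector(
    'link[rel="alternate"][type="application/json"]',
  );
  if (!link) {
    return null; // nothing advertised on this page
  }
  const response = await fetch(link.href, {
    headers: { Accept: 'application/json' },
  });
  return response.ok ? response.json() : null;
}

// e.g. initialize a map widget only once the data has actually arrived
loadPlaceData().then((data) => {
  if (data) {
    console.log('place data loaded', data);
  }
});
```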
Eventually we do plan to move to Plone 5.2 and Python 3, but the whole site's done in Archetypes now, so we're going to have to move everything to Dexterity, including all of the connections and relations and the various data. There are going to be a bunch of custom behaviors, because there's a lot of shared functionality in terms of bibliographic references and things like that. All the custom views are going to need to be updated. Our plan is basically to do a new, updated implementation of the site in Dexterity, test everything out, and leverage the fact that the site already has a comprehensive JSON export and a bulk import/update feature to do the migration. For non-custom content, we would just use collective.jsonify and Transmogrifier. I'm not sure that this site is a good fit for Volto at this point. As you saw from Tom's demo, it's really a data-driven research tool, so the need for customizable layouts and the other kind of nifty things you get with Volto is pretty minimal at this point, but who knows? That's the future. Anyway, that's it from me. Thank you, everybody, for coming to this talk, and thank you, Tom, for that wonderful demo. Yeah, thanks from me too, to everybody, and to you too, Alec. Hey, great. Thanks, guys. That was fascinating. It's really cool to see how much work and all the cool stuff you can do with a Plone site, especially in a context like this. We have one Slido question from Philip, and that's: what do you use to replace ATVocabularyManager — is it collective.taxonomy? No, it's essentially just a custom set of plone.app.registry-based vocabularies, a couple of custom classes that pull vocabularies out. In the case of Pleiades, there's a view that makes those vocabularies appear as content-ish things, to make it the same as ATVM. Yeah, it's custom code. There's an add-on — all of these add-ons are actually open source. I'm not sure how useful they are to anybody outside of Pleiades, but the Pleiades vocabularies add-on has all that code, and it used to all be ATVM in there; now it's the new stuff.
Pleiades is a community-built gazetteer and graph of ancient places, built using the Plone content management system. It publishes authoritative information about ancient places and spaces, providing services for finding, displaying, and reusing that information under open license. Pleiades development started in 2006 and went to production status in 2010. The site continues to serve scholars, students, and enthusiasts around the world today. This case study will present the history and major milestones the project has seen. We will emphasize unique features like customizations for geospatial content, maps, and data serialization; modeling of uncertainty and unknown geometries; and bibliographic data management. Co-presented by Tom Elliott (New York University), the long-time project director, and Alec Mitchell (Jazkarta, Inc.), a long-time lead developer on the project, this talk will also address the reasons for choosing and sticking with Plone, as well as expectations for future work.
10.5446/54788 (DOI)
Welcome back. We are going to be presented by Nilesh, who will be talking to us about Volto and personalizing Volto deployments. As you may recall, Nilesh was a Google Summer of Code student in 2018 and the creator of the create-volto-app tool for Volto, which many of us have been using — especially this past weekend, when a lot of people were learning how to use it. Nilesh is from New Delhi and he's joining us late in the evening for him. He's a front-end developer consultant at Eau de Web, which is a super powerhouse in the Volto world and the Plone world in general. With that, I'm looking forward to hearing more from Nilesh. Hi there. It's my second talk this year. Let's start. Let me share my screen first. Okay. Let's first move this thing — I don't know why it's not clicking yet. Let's start the talk. My last talk was on bundle splitting in Volto: I talked about how we use Webpack and the loadable components library to split bundles. This talk will be more about how we progressed into the customization of Volto, how we make Volto a completely extendable object — as the title suggests, the journey towards customization. Let's look at the progress we have made from 2018 to this year. In 2018, as far as my involvement with the community goes, it all started with the design by Albert Casado: he gave us the design in Zeplin, and then all the development started. It was first named Plone-React, and later we changed the name to Volto; Plone-React was at a very, very beta stage. We also needed scaffolding very badly, because users couldn't just clone the Volto repository and really start with it — we wanted users to create their own projects from scratch. So, as a Google Summer of Code student, together with Victor, I built this nice package, create-volto-app. It is now being deprecated in favor of the Yeoman generator, which is a decision we took because Yeoman has a very good community and also offers some very nice CLI options; I think it's very good to move to software with a very good community. Coming to 2019, we enhanced the Volto editor, which is built on Draft.js, and we extended the concept of tiles a lot, so that a user can create their own tiles. Coming back to tiles: we now use them as blocks. We migrated from tiles to blocks — we changed the name because we find it clearer and more generic for what we want — and I will talk later about what a block generally is. Coming to 2020, I would say this year is the major year for Volto add-ons, because we made a tremendous amount of progress in the add-ons area. We extended the concept of blocks to behave as add-ons — thanks to Tiberiu, who created the whole add-on architecture — and we have the listing block, the layout block; we now use add-on blocks, widgets, everything in the form of add-ons. We have the DX layout editor, which I mentioned already, and the yo generator, which is, as I said, the replacement for create-volto-app. 
For next year, I don't think we have a completely clear roadmap yet, but — taking from the talk Victor gave — we'll have Volto as the default front end in Plone 6, and also block transforms, so we can have an API that will convert a block from one form to another. Let's hope for the best. Coming back to the Pastanaga UI: it was created when we started, I think in 2017 or 2018, by Albert Casado — I saw the designs in Zeplin. It's a design system for Volto components, built on top of a Semantic UI theme. The theming engine we use is Semantic UI; one can create any theme based on Semantic UI, and Pastanaga is just one theme. There is a process for creating a theme on top of it: there is a theme.config file, and we override it with the components we have, so if you want to override a modal, you just change that modal component by using a different theme — the theme is configurable. One can also create a theme following this URL; it's a very easy process, so one can easily create a theme. This is how the current designs in Volto look. In his talk, Victor gave us new insights about the new Volto designs Albert is working on — I'm pretty excited to see them — but right now we use these, which were also designed by Albert and are pretty awesome; we use them in Volto right now. Okay, the scaffolding tool. As I mentioned, create-volto-app is a tool which scaffolds a Volto project with a single command; it is now being deprecated in favor of the Yeoman generator, which I think we have been using in the trainings we had. The Yeoman generator generates a Volto project using the yeoman tool and is a complete CLI solution to configure your project. It also offers more, because it's quite configurable: we can add add-ons using the addons flag, and we can pass the non-interactive flag if you don't want questions to be asked at the start — so we can pretty much configure it.
Okay, so back in 2018 when we started, Volto was behaving as customizable from the very start. We follow the component shadowing approach: we generally allow users to override components and create their own Volto sites. To override a component, we have to match a folder structure — there is a folder structure which a user has to match, and it lives in the customizations folder. Let's say we want to override the logo: in the customizations path we have these components, so we locate the Logo component in Volto core, copy the component with the things we want to change, put it there, and restart the Volto dev server. The Logo component — or any other component — is just a simple component, and we can override pretty much anything this way. If we look at this Volto component, the Volto logo is now replaced by the Pastanaga SVG. That's how overriding via component shadowing generally works: we just follow the path, we override, and the change takes effect. Then let's say we have add-ons. What is an add-on? Component shadowing is the modern baseline technique, but we want Volto to behave more like a highly customizable object: we want blocks, widgets, and everything else to be pluggable, so a user can work with them interactively and they stay isolated from each other. That's the add-on feature — they can be easily plugged into a Volto site and used there. That's how an add-on works, and I'll show you how to configure these add-ons and how to deal with them. Then we have blocks. As you all know, blocks are the building components of a website: almost any website, any web page we see, is mostly composed from blocks, so an editor can compose a single page using Volto data in the form of blocks, and these blocks are customizable and pluggable in the form of add-ons. However we develop with blocks, every block has two components, an edit and a view component, which are basically WYSIWYG — what the user sees is what the user gets — so there's not much difference between the edit and view components; they look basically the same. A couple of block demos: we have a listing block, which queries content objects and shows us what content objects are available, so we can navigate through them and rearrange them from the site itself. We have the toc block, the table of contents, and the columns block, started by Eau de Web; the columns block, as you can see, allows us to create different grids based on the column sizes — currently it's a 30/70 split, but we can have 50/50 or full width, so it's configurable. Next, the main thing: how do we create a new block based on the add-on configuration, how do we override it — creating a new block by overriding the configuration in the project config? (First, below is a quick sketch of the component shadowing we just walked through, and then we'll hop into the block configuration.)
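Here is the component-shadowing sketch promised above: a trimmed-down override of the Logo component. The file path is the conventional customizations location in a Volto project; the markup inside is illustrative rather than a copy of the stock component.

```jsx
// src/customizations/components/theme/Logo/Logo.jsx
// Same relative path as the component in Volto core, but under "customizations",
// so it shadows the original after a dev-server restart.
import React from 'react';
import { Link } from 'react-router-dom';

const Logo = () => (
  <Link to="/" title="Home">
    {/* the stock component renders an SVG; any markup works once shadowed */}
    <span className="site-logo">My Site</span>
  </Link>
);

export default Logo;
```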
So let's hop on to that. We have an add-ons config: every add-on should have an index.js file which receives the configuration coming from Volto core, and we override that configuration. In config.blocks.blocksConfig we add the block under the add-on's name, and we can provide the title — this is mainly for a block, and every block has an edit and a view component — and we can group it, for example under media, according to where it should appear in the block chooser. If any block needs some fixtures — let's say some sample data — we can also add them through config.settings, which we have at the bottom, config settings for the demo block; this is also an override, so we override it and add the demo block. This is how we create a new block: when we save, we have a new block in the block chooser. Let's see — there is the demo block, and if I click on the new block in the media group, it creates the demo block. Hey, we have just registered a new custom block, and this is how we generally add a new block. It can be anything here — it's just text now, but it can be something very complex, from a table to a hero block or maybe a maps block. So that's how we create and configure a block, and you can move on to creating much more complex blocks in no time. Let's move on to the Slate editor. Right now Volto uses the Draft.js editor, but it has, I think, some internal limitations: for plugins we need to use external libraries or write our own — for example, if we want to serialize the data we need third-party plugins — whereas in Slate we have built-in plugins to serialize and deserialize data. I think we also plan to use Slate in Volto core, so let's hope it gets included. It was initially rolled out by the Eau de Web team, and it's also customizable: I'll show you how we customize the inline toolbar buttons, how we add inline toolbar buttons, and how we can play with them. There is one similarity between Draft.js and Slate: they both have an internal state and both are for creating rich text content, but the plugin ecosystem is, I think, the difference. So let's see how we use the Slate editor. We have custom inline toolbar buttons in Slate — I think we also have this in Draft.js. To add a new toolbar button to our Slate toolbar, we have this makeInlineElement helper: it accepts some options, which are nothing but the title, the toolbar button icon, and the element type, which can be inline. It returns a new function which takes the initial configuration and returns the new configuration, like with the block settings; we pull these from the config settings and add the toolbar button. Let me show you how these buttons look.
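Before the button demo, to make the block registration covered a moment ago concrete, here is a minimal sketch of an add-on's index.js. The DemoBlock components, the icon path, and the group are illustrative choices, not something shown in the talk.

```jsx
// index.js of a hypothetical add-on: register a "demo" block in Volto's config.
import codeSVG from '@plone/volto/icons/code.svg';
import DemoBlockEdit from './DemoBlock/Edit'; // illustrative edit component
import DemoBlockView from './DemoBlock/View'; // illustrative view component

const applyConfig = (config) => {
  config.blocks.blocksConfig.demo = {
    id: 'demo',            // block id, same as the key above
    title: 'Demo block',   // label shown in the block chooser
    icon: codeSVG,         // chooser icon
    group: 'media',        // chooser group ("text", "media", "common", ...)
    edit: DemoBlockEdit,   // component rendered while editing
    view: DemoBlockView,   // component rendered on the published page
    restricted: false,     // available to all editors
    mostUsed: false,       // not pinned in the "most used" section
    sidebarTab: 1,         // show a sidebar settings tab for the block
  };
  return config;
};

export default applyConfig;
```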
Back to the Slate buttons: I actually updated one earlier. Let me go to — okay, if you type something here, let's say, and we go to the toolbar, you'll see I just created this button with a smiley tag. It's just a way to customize and add our own inline buttons with Slate, and we can do it in a pretty easy way — there's very little code to write to add a demo button like this, so it's just awesome, actually. Let's go back to the presentation. So that's how we add a button to the inline toolbar. The other thing is that we can also have custom widgets. Instead of blocks or buttons, we can also register a widget: it takes a widget type, and if we don't have a generic type — like boolean or any other type — we can use the widget field instead. In the widget field we add the name of the widget, and then we use it in the schema. In the schema — the schema we use with the inline form — a field accepts a title and this widget type, and that widget type is the widget we are going to use from the widgets registry. It also accepts a schema, so if you want the widget to have some particular schema, we can basically add it. To show you how we use widgets: let's delete this. Let's say we have an accordion block, and we put something here, say panel one and panel two — we have these widgets. How do these widgets come in? First we register a widget for the accordion — in the config we register the widget under the type "panels" — and when we've registered the widget, we use this panel widget (just an example) in the schema. In the schema, if the widget type is "panel", you see that this field is going to use this particular widget which we have registered; that schema goes inside the accordion schema, and we use it in the inline form, so whenever we reach that field we show it through this widget. If you click on this — it's not clickable, actually — okay. So that's how we can override the widgets configuration. That was a short talk about how we customize things, how we create and override components, and how we extend Volto from blocks to widgets in the form of add-ons. Thank you. Thank you, Nilesh, that was a good look at how to make custom blocks and custom widgets. Someone like me who's not a great coder always loves to build on top of other people's work, so it's great to see how easy it is now to make these customizations. I would encourage everyone who has questions for Nilesh to join him in the face-to-face, which by now I hope you know is the blue button below the video. I'm just checking to see if we have any questions — it doesn't look like we have any questions in the Slido — but thank you again, Nilesh, and please join him in the face-to-face.
Since 2018, we have been aiming to develop Volto to be as extensible as possible. It all started with the aim of behaving as a highly customisable object, so that a user has the power to build anything from a simple webpage to enterprise-grade intranets in his/her own way. This talk generalises how we progressed from modern customisation techniques like component shadowing to building highly modifiable and pluggable blocks, components, and widgets in the form of add-ons.
10.5446/54790 (DOI)
Yeah, thank you Kim. Yeah. So hi everybody. Good morning, good evening, wherever you are. And welcome to the Volto Block Development Patterns presentation. Now, I could have tried to put a lot of code in this presentation. I'm going to try to keep that light and just go over a couple of concepts, mention some things that we as a community and we as Volto developers need to look out for in the future, and basically a bunch of patterns. One of the main things that makes Volto so attractive for us is the power of Volto blocks, and they are not just powerful but also immensely easy to develop for. And it's a great success story in that I had colleagues who were new to Plone or didn't know anything about Plone; they had some React training, or none at all, and they were very, very productive in quite a short amount of time. And this is one of the biggest selling points of Volto, I think, in that we can scale with Plone. Just to whet your appetite, I'm going to show you a couple of blocks to see exactly how powerful they can be. This is one of the first add-ons, actually — the most important one I could have developed, in the sense that it also contributed to the adoption of Volto by our biggest client. This is the Volto Plotly charts add-on: the power of a really big and complex chart editor in Volto as a block. And just to quickly show it live, because a picture doesn't do it enough justice: I can go into this country factsheet, I can just click on "open chart", and I can have it connected to a data source coming from a CSV file or from an SQL endpoint. Everything here is customizable, including the colors of the bars and the format of the data. It's so much power, you wouldn't believe it. When I initially showed this product to our client, it was like instantaneous adoption of Volto — nothing else mattered. Okay, so next — sorry — what do we have? We have a Volto search kit. And yeah, you all remember Philip's slide from yesterday, and one of the main points in that slide was that the best code is the one that you don't write. This is a very good example of code that we didn't have to write. This is a search block that integrates with Elasticsearch. To create this block — because it's not a generic framework right now — we had to write some code, but there are big plans for this search engine. Just to give you an idea of how easy it was to create this: I think it's about 700 lines of declarative React code. We more or less just declare the facets, declare the indexes and so on that need to be read from the database. And here it is in production: you can use the facets — the library facets, the sidebar facets — you can search with words, and Elasticsearch is very, very fast. It's a really, really good example of why it was so important for us to adopt Volto and why Volto came just at the right time and couldn't have come earlier. Because Volto opens up a huge ecosystem of add-ons and products that we can integrate in Plone with almost no effort at all, compared to everything else. And this is also, let's say, one of the things that I believe makes the difference between improving Volto and improving the JavaScript story in Plone Classic. 
With Volta, with the huge ecosystem of react extensions, which can be easily integrated, we can leave others to work for us, so that we don't have to reimplement all this. And next, let's see what we have. I want to talk a little bit about our context and the things that we have right now. And it's this title, Zoocomponent Architecture is still 100% greatness and I hold this as very, very true. I'm a big fan of Zoocomponent Architecture. And I think this is what makes Loan what it is right now and a powerful content management system that we can all share, that we can all work on with our conflicting needs and our conflicting ideas and we can all put our effort into it and we can collaborate and make a great open source product. And I mean, we make fun of it sometimes and we laugh at its verblessness. Personally, I think that if we would cut, for example, the multi-plone database, supporting a single database, we might make our patterns simpler. But in the end, with Volto, we can leave that complexity aside because we can just focus on the front end and just have a lot less code that needs to be written in the back end using Zoocomponent Architecture. And this makes it easier to integrate new people into loan projects. And this is again, as part of our overall picture, as part of our context, the fact that we need configuration registries and we need them in in-plone where we have multiple configuration registries, we have the global component registry, the local component registry, we have the configuration registry itself and Volto on its own, it has its own configuration registry exposed as a single object. And there's no Zoocomponent Architecture equivalent right now in Volto, although at some point, me and my colleague, Aline, were discussing the fact that there are scenarios where we actually want a feature such as an adapter for Volto and there are use cases where it makes sense. I mean, the adapter pattern is on its own a very powerful abstract concept. And to have adapters, you need to have types and concepts. And we have some types like, for example, footer, break, and we have some concepts like the views in Volto, the widget registry, but we don't have abstract concepts that can breach Volto. And Victor and I were discussing about introducing something called, I'm guessing that they would still be more or less attached to not really generic. I don't know, who knows. We haven't actually got to really work on this and I'm quite excited to get the time to work on it. Yeah, so Volto in itself is actually quite small. I mean, you can easily go through all the source code in one day, two days, and you can really learn it and understand it. And I encourage you all to do that, because it's a great learning opportunity. The code is super high quality, as with any code that there are things that you wish could be improved, but it teaches you a lot. And you can, yeah, that's like the fastest learning thing that I can recommend. And there's another thing that we have to take into consideration that when we think about the Volto horizon of things to expect, is that patterns make things extendable. And without pattern, you cannot really make something extensible or you can make a concrete thing extensible, but not you can't general, generally being a young project, we're still discovering them. And of course, the bigger, the massive developers, the wider amount of content, the wider amount of code and use cases, and these patterns emerge. And yeah, I've, let's say brainstormed a little bit. 
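Since Volto's configuration registry is exposed as a single object, as mentioned above, an add-on customises Volto by exporting a function that receives that object, mutates it and returns it. A minimal sketch; the particular settings touched here are only examples.

```jsx
// index.js of a hypothetical Volto add-on.
const applyConfig = (config) => {
  // config.settings is part of the single shared configuration registry
  config.settings.defaultLanguage = 'en';
  config.settings.supportedLanguages = ['en', 'de'];

  // the same object also holds the blocks and widgets registries,
  // so block tweaks live in the same place
  config.blocks.blocksConfig.table.mostUsed = true;

  return config;
};

export default applyConfig;
```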
I've expressed some of our, some of the patterns that I saw, and I'm just putting them on the table, and I'm saying, okay, we Volto developers, we have to start using these patterns, so that we can also start going a bit further to make our code, code. And the first pattern, let's say I'm going by calling them function set center pattern, in the sense that they are something closely related to functionality. So we have block variations, which are default block values, let's say. Imagine that we have the case where I want, I don't know, the quote block, let's say, and I have some presets, like is it red with big padding? Is it smaller? Is it, I don't know, things like that. And right now we don't have this mechanism in Volto. It will probably be included so that we can, we can have, if not, I mean, if not necessary in Volto, if not directly needed, at least something that's available to Volto projects. And we have block extensions, and the block extensions are templates for blocks, and block can use a template, for example, the listing block in Volto, you can choose between album view and listing view, for example. And what I'm trying to say is that we need to make this a pattern. We need to provide functionality in Volto that points the developers to this pattern and say, hey, you can make your, when you create a block, make it extensible from the beginning and think about how it can be extended, think about which parts of it you can make reusable. And this will make, this will make our add-on and Volto blocks story go big, and it will make it great for us as a community, because we'll be able to not rewrite code every time, but yeah, we'll be able to reuse and share our own code. We have another pattern, which I call block embedding. And you have probably seen already this pattern in action. When, when Resman, for example, showed the Volto grid block, it was using our block embedding pattern. We have a Volto columns block, which again, we use is our block embedding pattern. And yeah, this pattern came from the idea that, first of all, when we edit inside Volto, when we edit the blocks, we are more or less doing writing in the equivalent of tiny MC, right? Except that all the paragraphs are on their own blocks. If we abstract the fact that each paragraph is stored as a block, then what we are looking is just a flow of text, right? So I consider this flow of information, this idea that the content inside a Volto page is more or less something that you would expect to see in a rich text area inside the flow. I consider it important. And for this, we have created this Volto blocks form add-on, which is just actually Volto code copied from Volto code extracted, split divided and cleaned and so on. So we took a file, but in Volto is about 2000 lines. We split them up, we made it reusable and it can serve as the basis of many other add-ons. So yeah, it's basically the block engine separate and you can embed that block engine into other blocks, for example. And that makes it possible to make columns, to make the accordions, to make the group blocks and so on. And yeah, the intention is to clean up that code, to create a blocks API, to like set it in stone and push this into Volto code. Yeah, and continuing with function-centered patterns, we have block transformations and this is a pattern that we are also thinking to integrate in Volto. And the block transformation is when, for example, you want to change the type of a block from one type to another. 
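As a sketch of the block extension/template idea described above, this is roughly how an extra listing template could be registered. The exact registry shape has changed between Volto versions (templates versus variations), so treat this as an assumed shape rather than a stable API; MyCardsTemplate is a hypothetical component.

```jsx
import MyCardsTemplate from './components/Listing/MyCardsTemplate';

export default function applyConfig(config) {
  // register an extra rendering template ("variation") for the listing block
  config.blocks.blocksConfig.listing.variations = [
    ...(config.blocks.blocksConfig.listing.variations || []),
    {
      id: 'cards',
      title: 'Cards',
      template: MyCardsTemplate, // receives the queried items as props
    },
  ];
  return config;
}
```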
And for example, you could have an accordion block based, for example, with the Volto blocks form framework, which stores, actually, which sets a condition on how the internal block data should look like. And this internal block data pattern is shared among all the other users of this Volto blocks form add-on. So you could change between block types. You could go from the columns block to the accordion block to the, what else, tabs potentially. Yeah, you, I mean, it's a strange it's a strange transformation, but it's actually it actually makes sense in this, because you don't want to create an Uber block, something that can, has all the functionality or yeah, you could, but if this thing, if this capability exists in Volto or will exist in Volto at some point, why not allow our ecosystem to be able to transform from one block to another, even for example, if we lose some of the block data in the process. And yeah, for example, I consider, I mean, I've talked about one-to-one transformation of blocks from an accordion block to a columns block, for example, but we can use the same transformation, or at least I would use the same transformation to also group multiple blocks, or transform multiple blocks, because I consider, for example, the group operation of blocks. And if you have tried Gutenberg, for example, the workplace-based editor, blocks-based editor, it kind of functions like the Volto block editor, that one also has this operation. And I can imagine an add-on or a feature in Volto where I can select multiple blocks and say group them. And maybe then I would promote this to an accordion block or promote this to something else. We want to have the same kind of usability with Volto as we do with rich text. And at some point, we lost some of this usability. And I've been working on bringing this usability, for example, the newly introduced multi-block copy paste that was added recently in Volto is also about this. And I'm also going to show a little bit more on how, or rather, the work that we have done to make Volto a little bit more like what we are used to in rich text and tiny MC. Okay, so next pattern, we have directed blocks. And those blocks are blocks that manipulate other blocks. And yeah, I'm going to show a little bit how this helps because it's a great thing. Actually, if you watch my 200% speed Volto Slate lightning talk, you might have got a glimpse of that. Okay, so this is a Google Docs template. I'm just going to copy from it. And I'm going to go here inside our Volto website. And this rich text field is actually Volto Slate. And fingers crossed that it works, I will paste. And you can see that my, I mean, you have to understand, Volto Slate doesn't customize anything in Volto. And by default, all the add-ons that we are working with, we don't want to customize anything in Volto. We want to use the Volto API. And I'm actually very conservative when it comes to Volto API. We try to, we try, we try not to change it. And as an anecdote, Victor and I had have a almost year long discussion on the introduction of a couple of keywords in Volto code. So, yeah, you have to, I mean, you can count on Volto code being well taken care of. And if Volto has got to version 10 right now, it's not because, yeah, we like to break things easy. Anyway, so my, my paste succeeded. And I have multiple blocks here. And why do I have multiple blocks? Because the contract in Volto blocks is that each paragraph is its own block. 
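The "each paragraph is its own block" contract is visible in how a page is stored: a blocks mapping holds each block's data and blocks_layout.items records the order. A simplified sketch of what pasted content like the above roughly becomes; ids and values are made up (real ids are UUIDs).

```js
// Simplified sketch of the blocks storage of a Volto page.
const pageData = {
  blocks: {
    a1: {
      '@type': 'slate', // one Volto Slate block per pasted paragraph
      value: [{ type: 'p', children: [{ text: 'First paragraph of the document…' }] }],
    },
    b2: { '@type': 'image', url: '/images/cover.png' },
    c3: { '@type': 'table', table: { rows: [] } },
  },
  blocks_layout: {
    items: ['a1', 'b2', 'c3'], // the order of the blocks on the page
  },
};
```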
And each, each, let's say, primitive of the rich text, for example, this image has been created as a Volto image block. The tables table here is a Volto table. And it has been reproduced from the paste. And yeah, another, another, let's say, director block. And this is something that I think it's bordering inside it. And yeah, it's, it's like a demo of what, you know, just, just having fun. Yeah, let's keep it that way. So I have this tabs block. And I can, I can write, for example, first tab. And I can say, yeah, second tab. And this is, this is a standalone standalone block. What it does, though, is it controls the form on screen. So you can, you can, you can edit in a, let's say, fake tab behavior using this block. And yeah, you can, I mean, you can the challenge that I took, because this is a block that's mostly been done for fun. And mostly, as a, as a technology demo, things, let's, let's see what, what we can do with, with the Volto in itself and what we can do with the current API. So if I want to, like, finish the current accordion, I just had another accordion here. Don't use this in production. So it's, it's open source code, but don't use it in production. It's just something to help find it. I didn't fill in the title. And many thanks to my colleague, Aline, who made sure that that validation is shown. Okay, so, well, you can see that everything, everything below product overview. All right. Oh, sorry, under this stage, because that's where I put the other tabs block. So it works. Okay, so let's see what else we have. We have as a pattern, presets of multiple blocks. And you can imagine, for example, I mean, you've seen Aline's layout presentation, and you saw that you can, you can define a content type, and you can set a set of blocks that would be the initial layout for that content type. But I can also imagine, for example, having a palette or a palette of blocks that you can produce at the global level, and you can just pick and say, okay, maybe I want, I don't know, like, yeah, let's, let's, for example, take one of these. Like, I want something with three columns, right? Yeah, I want a chart on the side and some text on the other side. So this would be not a single block. This would be not something specific for content type, but something that you could potentially have site-wide, and you could potentially just drop in your form. And, and that would be kind of like the equivalent of dexterity behavior, but specific blocks. Okay, so what do we have? Block development patterns, and one of the main pattern that we have used in the last year when we've developed blocks was this client-side forms. And I have explored it a little bit in the training, in the in the Voltaux Adams training. And I think it's one of the main patterns that we have, let's say, developed in the last period of time. And why, why is it important? Because once we express our blocks in terms of schemas, then we can also make them make the blocks extensible. And we can, we can have blocks that are reusable because of this. And you don't have to rewrite, and you don't have to create every time your blocks. And I mean, like, probably my biggest pleasure is to create a new Volta block. And, well, this would make it would make the fun a little less a little more rare, but yeah, we need to have this. And that, that schema for blocks, we derived it from, we derived it from the Plongrest API JSON schema. I mean, we use the same schema, we allow ourselves to, to enhance on that schema. 
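A client-side block schema in the style described above is a plain object with fieldsets, properties and required, mirroring the plone.restapi JSON schema. This is a minimal sketch for a hypothetical table-like block; the field names are illustrative.

```js
// schema.js: a minimal client-side schema sketch for a table-like block.
export const TableSchema = (props) => ({
  title: 'Table',
  fieldsets: [
    {
      id: 'default',
      title: 'Default',
      fields: ['description', 'celled', 'striped'],
    },
  ],
  properties: {
    description: { title: 'Description', widget: 'textarea' },
    celled: { title: 'Divide cells with borders', type: 'boolean' },
    striped: { title: 'Stripe alternate rows', type: 'boolean' },
  },
  required: [],
});
```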
And we are also thinking to allow passing directly a component here. And but more or less, I mean, the main idea is that we want to express our block components in terms of schema, and then use the schema to, to derive our block editing forms. And to use, to use that schema and generate the block form. I mean, and this is not specific to blocks for block forms, you can use them anywhere. We use this in my form. And if you look in the EA add arms, you see it everywhere, this in my form, we use. And I think as a, as a pattern, it's important to use it everywhere because it forces us to express our needs in terms of widgets and widgets are by default reusable. And another one thing to be aware of is that our schema is, you know, we, we are living in reactable and react means reactivity. And we don't, we don't have right now, but defined way to, like to be in widgets data to, to, to do all over complicated wizardry that we're doing in dexterity in auto forms in ZC forms and everything else. But what we have is reactivity. And that means that if we're mutating the schema, automatically, our, our components on the screen will, will update and they will match the mutated schema. And for example, and that means you can, you can, you can leave the schema to be as dumb as possible so that it's easily readable. It's easily changeable and just move the, the code that makes it specific, move it close to the actual block implementation. And this is shown by having the table schema instantiated and then just mutating whatever we need. And yeah, back to, back to the idea that we express everything in terms of form that makes everything, every type of interaction, we want to also express it or not every time, but as many as possible, we want to express them as widgets. And we have, I mean, probably maybe this code should be moved to VotoCore. Probably we, we should add some other widgets to it as well. Right now there are four widgets, I think. One is shameful, so I'm not going to show it to you, but the other three, three widgets are really nice. And this is the inline list widget, widget, you can think of it as a data grid field inside load. So you can define a schema for one row or one object. And with this widget, you can instantiate that schema. So basically create multiple roles and you can change your values for them. And if you want to explore this, what you can do with this thing, which is great, I think. And yeah, I love it. And you can reference the VotoAddon's training and it's online material. So even if you haven't participated, even if you don't have the patience to sit eight hours through, I wouldn't have, just go to the text information and see how it's done. Then we have inside mapping widget. And again, back to the idea, but everything is a widget. What we're... Yeah, I shouldn't let this thing trouble me. So back to the idea that everything is a widget. We have a mapping, like I want the agroecosystems to use the color yellow. And I want forest to use the color whatever, whatever color it is. So we are mapping keys to values. And another one, which is a little bit, let's say, stranger, I call it an object by type. And you choose the type of information that you want with it. And we use it, for example, in VotoSlate for the, for the link implementation. And it makes sense there. And let me show it, for example. Yeah. So like here, if I click on the link, then I want a type of, a type of information to attach to that. 
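A sketch of the "keep the schema dumb, mutate it next to the block" idea mentioned above, using Volto's InlineForm to render a sidebar form from the schema. The mutation shown is just an example, and the exact InlineForm props may differ slightly between Volto versions.

```jsx
// TableData.jsx: sidebar editing form for the hypothetical table block.
import React from 'react';
import { InlineForm } from '@plone/volto/components';
import { TableSchema } from './schema';

const TableData = ({ data, block, onChangeBlock }) => {
  // instantiate the plain schema, then tweak it close to the block code
  const schema = TableSchema({ data });
  if (data.celled) {
    schema.properties.striped.description =
      'Striped rows are easier to read in celled tables';
  }

  return (
    <InlineForm
      schema={schema}
      title={schema.title}
      formData={data}
      onChangeField={(id, value) =>
        onChangeBlock(block, { ...data, [id]: value })
      }
    />
  );
};

export default TableData;
```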
And yeah, sure, we have the possibility to make a smart widget, like just to allow the editor to put something inside and we can guess what it is for sure. Just... But here, for example, we say, okay, you can do either an internal link. So that information looks in a certain shape or an external link, right? Or an email address. And based on the choice of the tab that you have done, then the shape of the object would be different. So yeah. Next. Widgets, yeah, this is an important thing to make a widget a proper widget. They should follow the widget protocol. And the on change thing is like really straightforward. But if you're developing new widgets, try not to mix the block properties, if possible, because that would make the widget hard to integrate afterwards. Okay. So another development pattern that we have, let's say, understood or extracted. And we see that it's really, really important is that is that of the compose behavior. And this is something that React provides by default. And it's like the main thing of React. But when you start with React, maybe you don't realize what you have. And you don't understand what power it gives. And you don't understand how to use it. So I want to stress the importance of this pattern. Like, because it leads to smaller components, reusable code, easier, better testing and code that is easier to maintain. And how you use it, it's this line of code. For example, I would have two behaviors. Let's call them with block extensions with file data. And these are taken from the training, from the botolidons training. And we wrap them around the table block component that's provided as an example here. To understand the higher order components and to understand how these things work with block extensions, you have to understand the concept of higher order function. We as Python developers traditionally use them. We can pass functions as arguments. And we can return functions. And the decorators, for example, we use them all the time. Not so much in, not so much in, actually, we started to use them with the component architecture providing some registration as decorators. So, yeah. And so moving on to understand how the higher order components work, we have to understand the basic concept of composing functions. So, for example, if I have a function called add and takes x and y as parameters and returns those two parameters added, then we can make a function called add one where we kind of like hard coding one parameter. And next, if we move, for example, to a React component, really basic, we can call it footer. And it just does footer stuff. And then we say, hey, I want a particular type of footer. And we're just declaring a new component that returns, that just uses the footer but hard codes. One of the properties and so on. But if we take the concept of wrapping a component inside a particular behavior, we arrive at the higher order component concept. So now we can do with red background as a generic concept or as a generic functionality, it wraps a component, any component, we call it wrapped component, and that's a convention. And yeah, we will return the wrapped component. And in this case, we no longer hard code the component footer here. We received it as an argument. And I mean, these are some pseudo code implementation. I don't think we need to spend too much time on them. 
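To make the pseudo-code above concrete, this is what such a higher-order component looks like in practice. withRedBackground and Footer are the toy names from the explanation, not Volto API.

```jsx
import React from 'react';

// A toy higher-order component: it wraps any component and hard-codes
// one aspect of its behaviour (here, a red background).
const withRedBackground = (WrappedComponent) => (props) => (
  <div style={{ backgroundColor: 'red' }}>
    <WrappedComponent {...props} />
  </div>
);

// A plain component...
const Footer = (props) => <footer>{props.children}</footer>;

// ...and its decorated variant, obtained purely by composition.
const RedFooter = withRedBackground(Footer);

export { withRedBackground, Footer, RedFooter };
```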
But yeah, imagine that you have a higher order component or behavior that fetches data for you so you can focus or you can keep this code separate into a separate piece of information, a piece of code that doesn't, that you don't copy paste around in your blogs. You can really focus on making this as tight as possible and as best as possible bug free and so on so that it's easy to reuse in your other code. And for example, another another pattern or another higher order component would be the extensible blocks pattern. And this one, for example, you would just wrap your block and it will inject it will it will look up for extensions that are registered for this type of block. Like so basically, we import the blocks registry from the config and then we say, hey, the extension is blocks config by my block type. So I need to I need to be able to wrap the block. Yeah, so then I find whatever extension I need and I then inject them as property to the block. Okay, so another another really, really important concept to understand and I think it's a key concept to understanding or to widening, widening, understanding of Voto is the fact that Voto blocks are extensible and components are extensible. And there's this add on called Voto block style. I'm going to show you the action right now that doesn't really need trick in that it traps all the Voto blocks. It doesn't customize anything about about Voto, but it is able to provide styling for any Voto block. And for example, so basically, let's, I don't know, let's take this table. I'm not sure if it would work. But Voto block style adds one button here, the style palette, not the base best place, maybe we will improve this. But basically, it provides that button. And with this button, you can, it doesn't work for tables because they hard code the background. Let's try it with something else. Yeah. And you can, you can, for example, make text bigger, smaller. And this is, this is something that you can reuse as a generic styling framework for any, any block. And there is a framework inside it. So here, for example, it's empty, but you can register, you can register style, like, and I have it integrated, for example, in these slidesheets. And for example, yeah, this, this block and the background on it is created by this block style. So you can, you can have a palette of...
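A sketch of the extensible-blocks wrapper described earlier in this segment: it looks up extensions registered for the current block type in the blocks registry and injects them as a prop. The extensions key on the block registration is an assumed convention for this example, not core Volto API.

```jsx
import React from 'react';
// In current Volto the registry lives here; older versions used
// `import { blocks } from '~/config'` instead.
import config from '@plone/volto/registry';

const withBlockExtensions = (WrappedComponent) => (props) => {
  const type = props.data?.['@type'];
  const extensions = config.blocks.blocksConfig[type]?.extensions || [];
  return <WrappedComponent {...props} extensions={extensions} />;
};

export default withBlockExtensions;

// Usage: export default withBlockExtensions(TableBlockView);
```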
Distilled from our experience developing addons for Volto, we're showcasing some patterns that greatly improve development speed, provide uniformity and ensure future extensibility of Volto blocks.
10.5446/54791 (DOI)
Back everybody, we are here with Ellen Renea from ODUEB, who will be speaking to us about Volto Dexpair Schema and Layout Editor. This is a bit of magic that has been added to Volto. I'm looking forward to seeing how it works. It was magic in Plone and it's going to be magic in Volto. And as many of you know, Ellen is a longtime contributor to Plone, a very important contributor from ODUEB and he's done a range of things. If you look in his bio, it looks like a whole laundry list of technologies and he's spearheading the Volto work that ODUEB is doing such important work with. And so with that, over to you, Ellen. Thank you, Kim. Hi guys, nice to be here and I want to thank you and to congrats the organizer of this conference. I mean, the Plone community show once again that we can organize excellent conferences in any situation and in times of pandemic and yeah, so let's see what I've prepared for today. Let's start with me first for the people who don't know me. I'm working with Plone for more than 15 years and I mean this community for more than a decade. I work with ODUEB and the European Environment Agency and we did a lot of contribution to this community and we we've done, we have Plone add-ons. We, from 2015, we dockerized all of our stacks and we I'm actually the maintainer of the official Plone docker image and now we're in the Volto world and we started last year at the Plone conference. We saw Volto and we said this is the way and we started working on it and then we realized that there are some missing pieces in the Volto implementation at that time and we, our client and we said okay this is our chance to to bring the good things into into Volto from Plone Classic and we work on content types, textuality content types in Volto and Plone 6 and we said let's bring back the archetypes. Now that's actually a joke. I can see Philip saying no don't do that and that's true. Our archetypes are dead and we will do dexterity content types and what we did, what we have in dexterity content types, dexterity content types are a set of metadata so the content type is a definition of an object, of any object in Plone. So we will refer to that as a scheme because it's a schema and Volto introduced last year the blocks and we said let's refer to that as a layout because it's a page layout. You can do, it's not limited to layout but in this talk we will refer blocks as the layer and then we have behaviors and we, with the Python developers we love behaviors in Plone and we do schema with them 90% of the time. This slide is actually for the front developer because I know there are many at this conference and you will hear this term schema at this conference very often and it's normal because in the database you have schema and so when you're talking about forms because here we're talking about forms, schema is the best word to describe forms. So let's see how do we do schemas in Plone. We have a tutor web schema editor so in Plone 6 and Volto we have, we can have a generic double file which is XML and we have behaviors and I will, because this video didn't, wasn't shown before at this conference I will quickly play it so just for the people that don't know how a schema is done in Plone. So why do you need a schema? 
Because now we have blocks but there are some content types that don't, cannot be done only with blocks or you have, you have some metadata like the color of the, of the bike in this case or the, or I don't know for a book you have the ISBN or these are metadata for a page you have the publishing date, you have the key order, you have this will be used in search and categorization and all these other tools that, that depends on schema and and collections and yeah you cannot, you can do a page only with blocks but for, for largest institutions like universities or agencies you have indicators, you have data sets, you, you, you rely on schema and on metadata. Now as you can see yeah you cannot, choices I will just skip it. Let's see yeah then you can add a new, let's see here you cannot, so yeah if you're building a, an e-commerce site like this example you need, you need the data okay. And I should go full screen. So as I said you can add a schema through the web then you can export it to XML and have it, have it in the import ring, reuse with generic setup or you can have behaviors and all of these have one rest of the BI and that's, it is the same for all the, the case so this is the JSON file and yes one six and Volto's love schemas and how do I know that because in Volto we have five, four components in the core and we have, we have also, I don't see your, if you've been under various training you know that we have this subject widget that is based on one schema and the schema widget is the one used to define the, the in the control panel. So yes this is all about now for now about the schemas so I think everybody's eyes here because of this layer I think and why did we, what, what's our vision about this layer so Tiberiou Kim and I will take this, this slide from Victor's presentation and site Tiberiou Kim. 
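Whichever way a schema is defined (through the web, as an XML model or as a behavior), plone.restapi serves it in the same JSON form through the @types endpoint, which is what the Volto components mentioned above consume. A minimal sketch of fetching it; the site URL and the "book" type id are placeholders.

```js
// Fetch the JSON schema of a hypothetical "book" content type.
async function fetchTypeSchema(siteUrl, typeId) {
  const response = await fetch(`${siteUrl}/@types/${typeId}`, {
    headers: { Accept: 'application/json' },
    // depending on permissions, an Authorization header may be needed
  });
  if (!response.ok) {
    throw new Error(`Could not load schema for ${typeId}`);
  }
  return response.json(); // { title, fieldsets, properties, required, ... }
}

// Example with a placeholder site URL:
// fetchTypeSchema('https://example.org/Plone', 'book').then(console.log);
```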
So yes through the freedom that they provide Volto Blocks are a foundation for innovation that enables one to step in line with the latest state of the art for the development and yes people are, let's give power to the people right, the freedom comes great responsibility and I didn't say that and the Ronald Reagan did so yes we, when we started using Volto we had this first layer, first layout of the page and said okay let's give that to the users but then when you start adding blocks you'll get this kind of cordon with many blocks and yeah the user just don't know how to, how to proceed also if you give power to the people and you saw the last last evening green block addon then you can end up with the layouts like this one so yeah we give the people the power we give them, we need to trust them to give the power and then we wanted to simplify for the user or to distribute this responsibility to site administrators and editors and so we said okay we have a block what we can do but what properties does it block so you cannot new block you can delete it you can change the place order and you can change position and then you can change the you can make it read only or edit them okay so let's add this to the control panel because it belongs to there I mean we have to the web schema editor then let's have to the web block control panel so we added this to the control panel and the blocks also have has a scheme I mean the block is an object and it has a schema is defined by a scheme so this this form is actually generated by a scheme and yeah because this video was already showed two times at this conference I will I will try to do a live demo with the more advanced layout editor yeah this is the uh yeah web page and you can see uh before I do the demo for consistency you have to to to give the users to give a layout to two pages and and for example you have the title description and here you have a document byline and then but let's see where is my vortus site so yeah this uh this I have a vortus site with uh with some addons installed and if we add for example a book okay book and with uh with the layout editor so here I have a book title here I can say and here I I can insert some I can connect some metadata with the tinder so I can say the mission date and here I can say it was the author creator and then here I can say this is a key box or docks okay yeah I can see the the you can edit this but what what I wanted to show you is that um let's see here so yeah let's add a cover for this book then we can add some curves yes here you can see the the sidebar is different because it's using another scheme so here I can say I can I can update the the placeholder the helper text for all the the rich text inside here I can add some more rich text instructions and then I can allow blocks so I can say okay here you can add only images you can add slate text you can have either divide now this columns block has some settings we don't we don't want that to the editor to be able to edit this column so to make more columns so it can make it only settings we can have fixed position we can have it required can make this all required this one can also be pretty only actually it will be pretty only and so the editor will be able to add content only in this area I cannot more I'm on accordion and so here's here's and for this one I have some more settings like redoni settings and we don't titles and yeah that's it and now if I had a book I cannot title this one should be read only and I cannot and yeah this 
one can be also read only so yeah you can and yeah here we said that we cannot only image and another text so yeah let's see if I go back to my slides let's do that yeah now I can do with this I can basically export this one and have it as a behavior here so I had the layout fill set it blocks and blocks there and then I can just say the default values for this layout which is this one items save it because yeah you can we can do that through the web but we are part on developers and from our experience we know that we need to persist these settings on this can make them very predictable and now if I go to the size setup on another content type I can enable this behavior and then page can do that also under folder so so yes the layout the layout make the layout is using the the plon default for dexterity schema so there's no magic in there just we just used the default which is there was there and and yeah I already showed that now yeah you can you can do you can see more on the github on the port volto check the collective awesome for for the new addons and please update the collective awesome if you have addons or work done in in volto check out trainings on that work and the tibetius volto addons training to see how because I cannot stress this more than we should use schema and schemas also on the front because this or on the long term will give off will give us more benefit than than stress and yeah if you have questions head to community plon.org and thank you this was this this is an effort and was supported by the european environment agency and thank you all the web keep constantly and all the guys that we we supported that supported me in the past year with this pushing this feature into the core and I hope you you make great things with it and I hope this this story will be will be continued by the European community and at the next conference we'll see amazing things done with this layout and blocks and all the volto stuff thank you very much thank you Ellen that was pretty mind-blowing and I can see the chatter in the slack channel people are really impressed and in so am I to be able to do that kind of round tripping is pretty mind-blowing I see we have a couple of questions in the Slido the first one is what do you do if your content does not use the core meta metadata title or summary for example doesn't use the core metadata like can you please repeat the question because yeah it's what do you do if your content does not use the core metadata I think this might be something that we could if yeah we can we can decide how we can we can discuss it in the face to face yes yes yes also a second question was are the schema created in the layout editor searchable in the catalog yes I mean the blocks are searchable in the catalog as far as I know and the schemas basically yes I mean you can add an index in the catalog and you you have the searchable metadata okay I guess that's it for questions but I encourage everyone who's watching to join the face to face and keep talking to Alan and I really appreciate hearing about this wonderful development you've put together Alan thank you very much and thank you for your presentation thank you very much given thank you see you in the face
Through the Web Dexterity Content-Types with Schema Editor and Blocks Layout Editor
10.5446/54793 (DOI)
All right. Thank you very much, Ericko. And let me share my screen and make sure that's all working. Well, just tell me if it's not working somewhere. Okay, great. So WTA and Plone after 13 years. That's what we're going to talk about. We're going to start off by giving you a little bit of information about WTA. So it stands for the Washington Trails Association. They have been in existence since 1966. They protect hiking trails and wild lands in Washington State, which is in the northwest corner of the US. It provides members and general public with really extensive information about hikes in the region. So the membership of WTA is very active. This is the homepage and the powered by hikers. Thing is absolutely true. The members submit trip reports and photos. Members can make changes to the official hike descriptions. Members volunteer for trail maintenance work parties, which we talked about in a previous presentation yesterday. So now I'm going to turn it over to Jesse to give you some of the details about the history of the site. Yeah. Hi, everybody. The site was originally built by One Northwest, which later became Ground Wire. And a few of you may remember John Baldobiezzo. He was the primary developer on this project. And I think at the time it may have been One Northwest's most ambitious plone site and certainly one of the most ambitious. After Ground Wire closed in 2013, Steve McMahon and I took over the responsibility for hosting and for ongoing development. Jazz Cardi came on board in 2014 to implement a bunch of features collectively called My Backpack, which Sally will describe shortly, and also a volunteer management system, which we abbreviate to VMS. And that was implemented in Pyramid. And if you caught our presentation about Pyramid off on, I guess it was just yesterday, that project was described a little bit there. So an immediately notable thing about the site is that it's big. So there are 240,000 plus members more every day. We've got a large ZODB, a lot of blobs, and even more blobs that are offloaded to S3 storage. Some of the content type figures, trip reports, which I'll talk about, 182,000 of those, 372,000 images, and plone form gen forms, which is kind of an interesting stat, 521. So a significant portion of the site that as a developer, I think about less, but as a hiker, I use regularly, is comprised of a large volume of really great content about hikes in Washington state and about hiking in general. So they have safety guidelines, exercise regimens for hiking, trail rules and etiquettes, et cetera, et cetera. For a long time, the site pretty much stood alone in the types of features it provided for hiking enthusiasts. Here we see a hikes search form where I can search for hikes with all kinds of profiles. Like for example, if I want to hike 5 to 12 miles with no dogs, where I don't need any kind of paid trail pass, I can do that with the search form. There's a lot of metadata and you can get pretty specific about what you want. These days, there are a number of competing sites, many of them commercial ventures. All trails is a prominent example you might know if you're interested in hiking and live in the U.S. Where WTA still retains a big head start as in content collected from users. The main conduit for this is the trip report. You go out on one of the hikes listed on the site and afterwards you either log into the website or you use the mobile app that I'll be discussing a bit later. 
And you write a report of your experience, including things like whether the trail was washed out, whether there were trees down or the bugs were unbearable, that kind of thing. These trip reports are searchable and many of the site users take advantage of the trip report search even more than the hike search since its current information and the personal experience element is compelling. Of course, trip reports include a narrative body text element, which you can't see here, and up to four photos. The enormous number of photos the site accumulates every year has spun one of the major technical challenges. And David Glick came to the rescue by writing an integration with Amazon S3 that the site uses to offload large original images. Okay, now we're going to bounce back over to Sally, who's going to talk about the My Backpack features. Thanks. So the My Backpack feature is based on sort of a supercharged member profile, clone member profile, which includes a member's personal information like name, address, phone, that kind of thing, household information, specifically who are the member's dependents. This allows groups of members like parents and their children to be related together. And the list of saved hikes for the member when members are browsing hikes, like you saw Jesse showing you that search screen with the results, they can save the hikes they're interested in to their My Backpack. The dashboard, we're looking at the My Backpack dashboard here, shows lots of information and links to the member's dependents and saved hikes. Just talked about also the photos they have contributed to the site. That banner at the top of the dashboard displays three of their contributed photos sort of cropped and arranged in that little pattern. It displays a number of trip reports and the number of their upvoted trip reports. So that's a trip report feature which you may have or may not have noticed when Jesse was showing the trip report search screen and results screen. There's a little thumbs up symbol on the trip report. And members can upvote each other's trip reports. Members with lots of, sorry about that, members with lots of upvotes earn badges. So work parties, it also describes, these are managed in a pyramid based system, which we described yesterday as Jesse mentioned. Members also earn badges participating in the work parties. Any badges earned are displayed in the My Backpack dashboard. Members can see each other's My Backpack, but only the information that a member has decided to make public. So this My Backpack dashboard is implemented as a custom view for the clone member object. It has access to all the member profile information, which we talked about including dependents, save types. Then it performs catalog queries to get trip report and photo information. And it performs a Salesforce query to get the work party information because the work parties live in Salesforce not flown. So users save types can be shown in a list view like here or in a map view like here. All of the personal and household member information we've been talking about here sync to Salesforce, which is WTA's constituent relationship management system, as well as summary information about their saved types and trip reports, namely the number of their saved hikes, the number of their trip reports for all time and in the last year and the number of uploaded trip reports. Okay, now over to Jesse to talk about that Salesforce integration. 
Okay, so since the WTA staff that are responsible for tracking membership and other forms of engagement work, they're primarily spending their day working in Salesforce. A number of the member attributes that ultimately impact membership features on the Plone site are actually managed in Salesforce on contact records. Plone member records hold the Salesforce contact ID in one of their fields and the reverse is also true so we can reference each from the other bidirectionally. On the Plone side, these values are copied to the Plone member data objects and used in a variety of ways. For example, for members who write a lot of trip reports, a top trip reporter badge will appear next to their handle on the website, as well as whether they're a paid member. You might have caught that on a previous slide that showed the trip report in detail. We also have a pass group plugin which assigns members to Plone groups based on values that originate in their Salesforce contact record. Yes, so another technical challenge is keeping information in sync between the Plone site and WTA's Salesforce CRM. Since many of the values stored for members like their email address, for example, could be changed in either system, the sync has to be bidirectional. We use a system of hashing aggregated values to know when one system has updates that should be applied to the other. For member information, there's a combination of immediate real-time syncing and nightly batch processing. Most of the syncing is done in nightly batch jobs, but here's an example where we immediately send your most recent login time to Salesforce every time you log in. You don't see it here, but this is called from a Plone event handler for the login event. And let's see, you can also see we've got a function decorator here which takes care of all the Salesforce API retrying, and it will retry a few times if it fails initially. So the other category of member sync is a big nightly batch job. Every night, all the current Salesforce member records are queried, and we use the stored hash I described before to see if the Plone member needs an update and then ignore or update as necessary. The code example here is pretty hand-wavy since a lot of the detail is elsewhere, but this gives you the general flavor of what we're doing. In the other direction, some basic trip report information keyed by Salesforce contact ID gets sent over to Salesforce. In this case, we're using the Salesforce bulk library to help with this. All the sync, whether real-time or batch, all happens with celery tasks by a collective celery, collective dot celery. I think we may again have David Glick to thank for that. I forget. Okay, now back over to Sally who's going to talk about the site's integration with Mapbox maps. Thanks. Yes, we have David Glick to thank for quite a number of things on this site. Maps are critical to the WTA site as you've already seen. Maps are part of my backpack and the Trailblazer phone app, which we're going to talk about in a minute. The most important map on the Plone site is the Hike Finder map. The goal of that map is to give hikers lots of control when browsing and searching for hikes by taking advantage of the very rich dataset about hikes that is managed in the Plone site. The Hike Finder map was designed and implemented in Mapbox, and the same map, the same Mapbox map forms the basis for the MyBackpack and Trailblazer maps as well. 
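The talk does not show any map code, so everything below is an assumption: a minimal, illustrative Mapbox GL JS setup with a custom Studio style and clustered hike markers, roughly matching the clustering behaviour of the Hike Finder map described next. The style URL and the GeoJSON endpoint are placeholders.

```js
import mapboxgl from 'mapbox-gl';

mapboxgl.accessToken = 'YOUR_MAPBOX_TOKEN';

const map = new mapboxgl.Map({
  container: 'hike-map',
  style: 'mapbox://styles/your-account/your-custom-style', // from Mapbox Studio
  center: [-121.7, 47.4], // roughly Washington state
  zoom: 7,
});

map.on('load', () => {
  map.addSource('hikes', {
    type: 'geojson',
    data: '/hikes.geojson', // placeholder endpoint serving hike points
    cluster: true,          // let Mapbox group nearby hikes
    clusterMaxZoom: 12,     // stop clustering when zoomed in far enough
    clusterRadius: 50,      // cluster radius in pixels
  });

  // one circle per cluster of hikes...
  map.addLayer({
    id: 'hike-clusters',
    type: 'circle',
    source: 'hikes',
    filter: ['has', 'point_count'],
    paint: { 'circle-color': '#2a7d2e', 'circle-radius': 18 },
  });

  // ...labelled with the number of hikes it contains
  map.addLayer({
    id: 'hike-cluster-counts',
    type: 'symbol',
    source: 'hikes',
    filter: ['has', 'point_count'],
    layout: { 'text-field': ['get', 'point_count_abbreviated'], 'text-size': 12 },
  });

  // individual hikes appear once the clusters break apart
  map.addLayer({
    id: 'hike-points',
    type: 'circle',
    source: 'hikes',
    filter: ['!', ['has', 'point_count']],
    paint: { 'circle-color': '#11b4da', 'circle-radius': 6 },
  });
});
```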
Now, one of the most interesting things about the Hike Finder map is that the design and Mapbox work was actually all done by volunteers. WTA has an amazing volunteer community. Not only do they contribute trip reports and work on work parties, but they contribute to the organization in many additional ways, including contributing their technical expertise. So, they did the map and the result was integrated and deployed by Jesse and Mark Rue Wiley. This is a screenshot of Mapbox Studio, which is the tool that the volunteers used to develop the custom map style. Here's the Hike Finder map showing the filtering criteria that can be checked. The map groups results to declutter the map, instead of showing like a gazillion individual little dots. You can see it's grouped with the numbers shown, and either zooming in on the map or clicking on one of the clusters breaks the big groupings into smaller groupings and then eventually into individual hikes being shown. You can also search the hikes, search for the hike by its name there in the text search box. By clicking the green arrow at the top right of the Hike Finder map, a gallery of photos is shown listing the hikes that are displayed on the map. The order of the hikes shown in the photo gallery is random-ish. I'll call it it's not truly random, but implemented in a way that it's not likely to repeat. That behavior is intended to avoid always showing the same popular hikes at the top of the list. This is an attempt to prevent trail crowding by ensuring that everyone doesn't always choose the same hikes to go on. So, some trails I should say to hike on. Here are the Hike Finder maps slider controls that allow filtering by elevation and length. As you can see, you can adjust both the low and the high end of both those things. So, now we're showing just the higher elevation maps in that same geographic area. Now, here's the Hike Finder map showing a selected hike. You clicked on it in the map, and here's some details. You can scroll to see more summary information down here, and then click on the hike to see all the details. So, the previous version of the Hike Finder map was implemented in Google Maps rather than Mapbox, and it was also developed by a volunteer, which is pretty interesting. And that older Hike Finder map is still preferred by some people, and it is still available on the site. So, now over to Jesse to talk about Trailblazer WTA's phone app. Yes, so Trailblazer is a mobile app for both iOS and Android, and it had its first major release around 2015. It's really fun because it combines all the same hike and trip report information that's on the website, but it can also layer in the geolocation features from your phone. So, this lets you do things like see your position on the maps and search for hikes near your current position. It also caches hikes you've been viewing so that you can continue to view them without an internet connection, and this is obviously very helpful when you're out hiking. The actual mobile apps were developed completely by these amazing volunteers we've been talking about. I developed the JSON API that they use on the Plone site. This was probably at the very beginning of the Plone REST API project, so it does not make use of that infrastructure in any way. 
Using it may have saved me some time, but I think there are also advantages to having complete control over the API and the data served, mostly because I could provide exactly what the app developers wanted instead of asking them to adapt to the API, and they were volunteers after all. So, this will be a whirlwind tour of the app itself. Here we're looking at the home screen, which includes the main menu at the bottom, and a few canned search options, typical popular search options, including the near me option. And this is what the hikes near me looked like from my house. And in addition to the canned searches, there's also an advanced search form that pretty closely replicates the form on the website. We did make some simplifications to the inputs to be more idiomatic for the phone. This is the Trailblazer version of the HikeDetail page. There's a scrollable image gallery at the top, and then very similar information as to what's on the website, just with a slightly different layout. TripReports use the same idea. It's the same data as the website with a slightly different presentation, and we've added infinite scroll and some other phone-friendly usability tweaks. So, unsurprisingly, you can also submit TripReports from these apps. There's an initial form for the metadata. Again, this form very closely matches the web form, and even the selection menu options are fetched from an API endpoint. And then with that done, you get a second page where you can enter the body text for your TripReport and up to four photos. There is also a view of your saved MyBackpack hikes, and these can be added to or removed on the app also, and then the list can be synced back and forth with the website list. There are many more features, too numerous to mention, but I will give a quick description of a few. So, members can submit proposed amendments to existing hiked descriptions, and these then go into a review queue, and WTA staff can look them over and automatically merge the changes into the canonical hikes if they see fit. They also have a well-developed system for managing memberships and donations, including e-commerce integration, which they do via Classy. The challenge of managing the ever-growing, so not only are there an endless supply of user-submitted photos, they're also getting bigger all the time because that's the nature of phone cameras, and this led to the development of collective.s3 blobs, which was a product of David Glick's brain, yet another, and there is a talk at the 2017 Barcelona conference specifically focusing on this tool, so if you're interested in that tech, there's more to learn by reviewing that talk. Okay, so we're going to finish with some quick pros and cons of PLONE for this project over the years. Excuse me. I would say basic stability and security have been great. There have been very few instances of isolated poor performance, and even these in general could be traced to some poorly configured catalog query or something like that. The number of users and the content volume has increased more or less linearly for 13 years, and PLONE has handled this very well. The various custom content type systems have made it pretty easy to model WTA's particular domain, and the PLONE workflow system and permission systems are hard to beat. So the other side of the coin, upgrades have been hard. 
I know this is a pain many of us have endured, and upgrades, sheer numbers matter, since an upgrade attempt will take many hours when you've got a huge number of content objects, and many hours is actually a best case scenario for WTA. Based on some experience someone else had made, David Glick put together an in-place archetypes to dexterity migration system that brought the time down just two hours without in-place migration. The migration was taking literally days. A multi-day edit freeze on a high participation site like this just isn't going to work. So the in-place migration was really critical despite its craziness. Jaskarda has used this in-place migration on a number of other projects now, and it's worked out pretty well. We've also been burned by subtle API differences between archetypes and dexterity content types, which sometimes took a while to come to light. So we were doing retroactive post-migration fixes for a while, but I think that's behind us at this point. So on the whole, a lot more good than that. And that's all I've got. So at this point we will transition over to taking questions if people have them. I do have a question. Tell us more about this in-place migration solution you have in-house. And how is it different from what do you have in the community? I'm probably not the best person to answer that question. I've used the migrator, but didn't have anything to do with its design. I just, the reason I applied the word crazy is because it does things that I would not have ever dared to do, like, in-place overwrite the class of a persistent object and things like that. But the main benefits, I guess, I'm sure the same benefit as other similar tools is that you don't have to copy the old object into a new object and then save the new object. It's just, yeah. Yeah, a comment here. I looked to my left because the moment you mentioned David Kleeckin crazy in the same sentence, he complained. Maybe David can join us in the jitzy after this and answer any more detailed questions. I would say that's a really good idea. Yeah, because I certainly can't say any more about it. And Alec. We do not have more questions in sliders, so feel free to make the closing remarks. Okay, well, thanks everybody for listening to us and thanks so much for this great system clone that has really made a lot of things possible for WTA and we'll join you in the face-to-face meeting soon.
Serving hikers in Washington state, the Washington Trails Association protects hiking trails and wild lands and provides members and the general public with extensive hiking information. A Plone site since 2007, wta.org has extensive custom features, 240,000 members, and an enormous amount of content. We will take a tour of some of the most interesting features of the site, including the Salesforce and Mapbox integrations, iPhone and Android apps driven by a custom API, a process to crowd source corrections to hike descriptions, and a culture that has allowed WTA to leverage the expertise of volunteers to implement significant website features.
10.5446/53854 (DOI)
Actually, there are quite a substantial similarities between Nathias model and our model. There is, you have different focus, I would say, in what we were interested in. So that's my two colleagues here. Danny Gashke is working for an NGO now for some years and is trying to combat hate on the Internet or something like that. And he's a social psychologist by training and a lawrence mathematician and computational social scientist from Bremen. And basically, Daniel Gashke had the idea at a conference to try to integrate all the different levels on which influences on phenomena such as ecochampus and filter bubbles can take place into a single model, maybe just for clarification. So in our paper, we suggested using the term filter bubble for something that happens within an individual, something like the selection of information that you receive from all the available information on the Internet. So how narrow or how wide is your information diet? And we would use the term ecochamber for something social for like-minded individuals sharing ideas, exchanging ideas, building a kind of bubble on the Internet where they exchanged ideas. Yeah, so basically our model tries to integrate processes on the level of individual minds, something like confirmation bias or the other terms that are used to describe similar phenomena. So preference for information that is similar to information that is already somehow stored within the mind or active within the mind. We want to integrate processes on the level of groups, something like social homophilia, group polarization, or we talked about that yesterday. And we want to integrate processes on the technological or societal level like the way in which you receive information if you receive it from Mars, media, or from recommender systems that are typical for the Internet. And in my presentation now, I will not talk about all the simulation that we ran with our model. I will try to just exemplarily show some simulations on the effect of such recommender systems. So I will show my model in a second just to explain a bit of the surface here. I think those slides are a bit more handy. So as in Mattias model, this would be sort of say the outcome of our simulation would be how this world changes, which is inhabited by smileys of different colors, which usually have no meanings in the presentations in the simulations I showed you. They have no meanings. And yeah, this would be so to say the outcomes are displayed here. We can on one side look graphically at the kind of world that emerges. We can think about which possible worlds could emerge from this pattern. We could think about a world where everyone after some rounds of simulation has the same opinion where we have kind of consensus that can be more narrow or more wide. We could think about a society that is fragmented, where our smileys form different kinds of bubbles, different types of groups. Or we could think about a society that constantly changes where no stable pattern emerges. So that would, for example, be, I don't think they are all we had, but these would be, for example, possible outcomes that our simulations could have. As in Mattias case, we were also interested not just watching graphically and describing what is happening there, but also finding mathematical measures to describe what is happening in those simulations. These would be our three measures that we used. So here the smileys would again be our agents. 
We already know the term from Matthias's model — so the people that somehow do something in the world. The gray dots would be pieces of information that they receive, and pieces of information are located somewhere within this two-dimensional space. So we could think they are aligned somewhere along two political dimensions — left, right, and something else. And then we also have a friendship network, which I will not talk about because it does not really play a role in our simulations; in the simulations that I want to show you, we turned it off, but we could also define groups of friends among our agents where they preferably share information. As for the outcomes, we looked at the mean distance between the info bits after some rounds: the closer the distances, the more tightly knit the community is and the more we have consensus; the larger the distances, the more fragmented our community is, the more bubbles we have, and the more dispersed the society is. We also have the mean distance between the agents that share the same piece of information, which also shows us a bit of the fragmentation of the society. And we also measured the mean distance among friends, but this is of course only relevant if the friendship network is turned on. So here we have some general parameters like the number of agents and the number of rounds of the simulation, things like that. And here we could also introduce that our agents at some point die and are replaced by new agents, to bring some kind of dynamic into the system, which can be interesting in some cases — whether a pattern that has emerged can be destroyed or changed by introducing some random change. This is the very, very small brain, so to say, of our agents, which is of course far less ambitious than in Nadia's approach: our agents just have a forgetting function. So they receive information from the environment in some way, and they have a certain memory capacity — in our standard setting, 20 pieces of information fit in the memory — and their position within this world is defined by the average of the positions of the different news items that are in their memory. After some rounds, when the memory is full — after 20 rounds if you have a capacity of 20 items — they have to drop one of the items that is in the memory and replace it with a new item. And the chances are higher that a memory item that fits in well with the other memory items is kept in memory than a memory item that is completely different from all the other memory items. This is defined by a forgetting function that basically has two parameters: one is the threshold at which an item is identified as not fitting with the others, and the other is the sharpness of the curve — whether it is very steep or softer. So that's also relatively simple. Of course it would always be possible to make such a model more complex, and we could plug in some of that stuff — but not yet; in our simulations, in this first step, so to say a proof of concept, we were more interested in the type of news propagation, and often it is sensible, if you're interested in one thing, to keep the other parts of the model relatively flat, relatively simple. Yeah, then we have the thing that we are interested in, which would be the way in which our agents receive information. Basically, we have four main ways in which they do that. One would be just random: they are walking around and encountering some piece of information.
One would be central news propagation, where they receive the same information from a central source — everyone has the same probability of receiving information from that central source. And then we have two different kinds of recommender systems. One recommender system — we call it the close recommender system — works the way recommender systems usually do, based on similarity (and usually also popularity): it recommends information that is similar to the information that is already in the memory storage. And then we also tried out a completely different recommender system that constantly recommends information that is dissimilar to the information that is in the memory storage, just to see if this has an effect and whether it can, in some of the simulations, stop the fragmentation, or whatever, of our society. Yeah, these are the different simulations that we ran in this kind of proof-of-concept study that we published here. I will focus on these five, which are concerned with the mode in which participants receive information. So we see it's always one of the two recommender systems that we have, either the normal one or the other one that recommends distant information, and social posting is turned off or on. Oh, I forgot to mention social posting: if it's turned on, individuals can share information with others with whom they form some kind of network — to talk about that in detail would get a bit complicated, but basically they share information with others. Okay. Now we can switch to our model, programmed in NetLogo, which is a very traditional way of doing such agent-based modeling — young people like Nadia use Python. So here we have the presets for the twelve simulations that we describe in our paper. We can now choose preset five. Here we have close information, so the normal recommender system, and social posting is turned off. So what would we expect? I suppose this recommender should have an impact and somehow our society should fall apart into different groups. It's doing so relatively slowly — unfortunately my computer is also not very fast. But if you watch that now for some time, maybe a few minutes or so — we see now we have 200 rounds, and in the simulations in the paper we used 5,000, so we would have to watch quite a bit — we see already that the society is kind of starting to develop such groups that share similar news items and therefore are positioned in a similar place in our political spectrum. If I do the same and turn social posting on — if they can talk to others and share information with others — then we can assume that this should happen considerably faster, which it does. So here we see, already after 100 rounds, a clear development of such echo chambers. Now one question, one basic, fundamental question, would be whether we have means to stop this process and somehow prevent this fragmentation of society. So if we turn social posting off and our distant recommender system on, that should prevent this from happening, and you see a completely different type of society emerges, for as long as we want to watch it. So here we won't have this formation of echo chambers; we have some constellations that form temporarily, but after some time they dissolve and our agents move somewhere else. As I said, the position of an agent is always defined by the information items in its memory storage. And then we can see what happens if we turn both on, social posting and our distant recommender system — but the way we programmed our model, it's a bit of a fight.
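To make the mechanics described above concrete, here is a minimal Python sketch of the agent memory with its forgetting function and the four information-propagation modes. The constant values, the logistic form of the forgetting curve, and the softmax-style weighting of the recommenders are my own illustrative assumptions; the published model is implemented in NetLogo and differs in detail.

import numpy as np

MEMORY_CAPACITY = 20   # "20 pieces of information fit in the memory"
THRESHOLD = 0.5        # distance at which an item counts as "not fitting" (assumed value)
SHARPNESS = 10.0       # steepness of the forgetting curve (assumed value)

def keep_probability(distance):
    """Logistic forgetting function: items far from the rest of the memory
    are less likely to be kept when a new item arrives."""
    return 1.0 / (1.0 + np.exp(SHARPNESS * (distance - THRESHOLD)))

class Agent:
    def __init__(self):
        self.memory = []                       # 2-D info bits in a political space

    @property
    def position(self):
        """Agent position = average of the info bits currently in memory."""
        return np.mean(self.memory, axis=0) if self.memory else np.zeros(2)

    def receive(self, info_bit, rng):
        self.memory.append(np.asarray(info_bit, dtype=float))
        if len(self.memory) > MEMORY_CAPACITY:
            mem = np.array(self.memory)
            # distance of each item from the mean of the *other* items
            dists = np.array([np.linalg.norm(mem[i] - np.delete(mem, i, 0).mean(0))
                              for i in range(len(mem))])
            drop_weights = 1.0 - keep_probability(dists)   # misfitting items dropped more often
            drop_weights /= drop_weights.sum()
            del self.memory[rng.choice(len(mem), p=drop_weights)]

# The four propagation modes: how an agent gets its next info bit from the pool.
def random_info(pool, agent, rng):
    return pool[rng.integers(len(pool))]           # stumble over a random item

def central_info(pool, agent, rng, central_idx=0):
    return pool[central_idx]                       # everyone gets the same central item

def close_recommender(pool, agent, rng, temp=0.2):
    d = np.linalg.norm(pool - agent.position, axis=1)
    w = np.exp(-d / temp)                          # similar items are favored
    return pool[rng.choice(len(pool), p=w / w.sum())]

def distant_recommender(pool, agent, rng, temp=0.2):
    d = np.linalg.norm(pool - agent.position, axis=1)
    w = np.exp(d / temp)                           # dissimilar items are favored
    return pool[rng.choice(len(pool), p=w / w.sum())]

# One illustrative run with the close recommender:
rng = np.random.default_rng(0)
pool = rng.uniform(-1, 1, size=(500, 2))           # info bits in a 2-D opinion space
agent = Agent()
for _ in range(30):
    agent.receive(close_recommender(pool, agent, rng), rng)
print(agent.position)

In the full model one would of course loop over many agents and many rounds, add social posting, and track the distance measures described above; the sketch only shows the two core mechanisms.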
So it takes some time, but in the end the social posting is stronger in our model — though that depends, of course, also on how we model it, so one could probably also tweak it. In our case we get kind of lively discussions between our agents, but in the end they will still tend to form those echo chambers, just after a much longer time period. So much for a few of the simulations that we ran here, with the focus on information propagation. What is the goal of all this? Maybe this is something to talk about: why does one do that, what is it good for? I think one thing is that, in some way — if you look at physics and theoretical physics — it's important to think about what the theories that you need to explain or describe the phenomena you're interested in must look like. In physics it's very popular to just play around with different theories, play around with different kinds of explanations for the world, and see which ones make sense and fit in with what we can observe, and which ones do not make sense. And sometimes you encounter very novel, very interesting, very strange ways of conceptualizing something that you hadn't thought of. And the other way around, you may also sometimes encounter the situation that the theories people actually favor — theories that have developed at some point due to their own internal dynamics — just don't work, just won't explain the phenomenon in question. So one initial question here is, of course: can we create all those different worlds that we want to create with our model? If not, our model would be under-complex, and we would need to introduce some new things so that we can produce the different outcomes. Here we can say: okay, we can produce all the different worlds that I talked about with our model, so it's not under-complex. It could be that something is redundant in our model — that we would have to find out in lots of simulations — and then we could throw it out and use a simpler model. And the next step of validation would be, of course, to compare the things that we can simulate here with actual data, with what actual people do, like Nadia did in her study; that we have not done yet for this simulation. Basically, we had a problem: we had a project running with a PhD student who had lots of Twitter data, but he quit, so that's a bit on hold — I'm looking for someone to analyze the data. But I can show something like that from a very recent paper by Jan Lorenz, my colleague here, who used parts of our model for a new kind of model. That's also something that I think is very positive about this agent-based modeling community: it's all very open, everyone shares their code, and you can often relatively simply import someone else's code — modules, for example, from Nadia's model into our model, or vice versa. So it's often relatively simple to create new models to test something; it's a very open and very collaborative community. And here, in this very recent paper by Jan Lorenz in Psychological Review, he changed our base model a bit so that it is more about attitude change and more meant to represent psychological theories on attitude change — things like social contagion, consistency theories, motivated cognition. I don't want to go into the details here, but as we have also seen in Nadia's presentation, this is basically what you would do.
Here we have some distributions that were generated by the model in some configuration, and here we would have actual data from the European Social Survey from different countries. So, for example, here we would have the answers of a representative sample of Norwegians to the question whether European unification is a good thing, and the general consensus is: well, maybe — somewhere in the middle. And similar distributions can also be created with the model. Or we would have a bipolarization of society — European unification in Serbia, so should Europe move closer together — and here we have a very strong no and a very strong yes position; something like that can also be created. So this, in Jan's model, would be the next step that we would also like to take for our model. And what we are currently working on, and will do in the very near future, is to take some elements from this attitude-change-focused model, integrate them into our model, and create what we would call a kind of topic fight model. There we will, like in our original simulations, mainly focus on news items that agents receive from the world, but we will attempt to stage a kind of topic fight between different topics that compete for attention, and somehow try to model something like news cycles and the things that can be observed in the media: topics getting popular, shifting public opinion, and then probably also things shifting back after some time. So that would be the next step and what we are currently working on. I hope I was not too brief — otherwise I can also show you some smileys for a while longer. But thank you for your attention.
The ubiquitous availability of information in the age of social media and the increasing personalization of information flows are often alleged to contribute to the emergence of "filter bubbles" and "echo chambers". In our triple-filter-bubble model (Geschke, Lorenz, & Holtz, 2019), we formalize filtering processes on three different levels: algorithms, group processes, and individual cognitive and motivational processes. We used the NetLogo (Wilensky, 1999) agent-based modeling environment to analyze twelve different information filtering scenarios to answer the question under which circumstances social media and automatic recommender algorithms are most likely to contribute to fragmentation of modern society. We found that echo chambers can emerge as a consequence of cognitive mechanisms (e.g., confirmation bias) alone under certain conditions. When social and technological filtering mechanisms are added to the model, polarization of society into even more distinct and less interconnected echo chambers can be observed.
10.5446/54816 (DOI)
As you know, there is, like, partisan polarization in the US about many scientific questions, and one of them, which is very important, is climate change. That means, of course, that Republicans are less likely to believe in climate change than Democrats are. And one of the most widely accepted and researched explanations for this partisan divide is a motivated reasoning account that proposes that people follow their identities over accuracy — in the sense of the cognitive processes proposed by Dan Kahan and colleagues, which basically suggest that you use your deliberative abilities to convince yourself that the perceived opinion of your group is correct. So basically, if you think that Republicans think, or should think, that climate change is not real, then you use your deliberation to convince yourself that that has to be true. The evidence for this comes from correlational studies. On the y-axis, you see the probability that people agree with the statement that climate change is happening, and on the x-axis, you see a measure of deliberative abilities called the cognitive reflection test. It has three questions in it that basically measure how good you are at figuring out answers to certain mathematical problems. And if you look at liberal Democrats, it is as expected: basically, the better your deliberative abilities, the more likely you are to believe in climate change. But when you look at Republicans, it is the other way around: the more deliberative Republicans are less likely to believe in climate change. And this is the kind of scissor-like pattern that the motivated reasoning account tries to explain — that basically the Republicans who can deliberate very well, who think very well, are better suited to convince themselves that climate change is not happening. But what I'm going to try to talk about today, at least in the first half of my presentation, is whether there is evidence for this outside of the context of global warming. Because a motivated reasoning account is not just a single-issue thing; it should make predictions for all polarized scientific issues, and there are other polarized scientific issues in the US, not just climate change. Do all of them follow this kind of scissor-like pattern that we see? That's the first thing. The second is that I'm going to focus specifically on climate change and global warming and see whether it is really partisan polarization that causes this scissor-like pattern. So in the US, we asked people about a bunch of science topics and asked them which ones they think are politically divisive, and there is a lot that people think is politically divisive. Now, according to the motivated reasoning account, whenever there is political division we should see the kind of pattern that we saw for climate change. And indeed, we replicate this pattern for climate change. But when we look at all the other issues, for Democrats, deliberative abilities — cognitive reflection abilities — always increase agreement with the science: basically, the more able you are to deliberate, the more likely you are to believe in the actual science. And this is pretty much the same when we look at Republicans, with the exception of climate change. So we see a reverse correlation there, but that's really the exception rather than the rule. In most cases, even Republicans who are more deliberative follow the science. So really, focusing on climate change is basically focusing on the exception.
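For readers who want to see what such a "scissor" looks like statistically, here is a small illustration with simulated data — not the study's data — where belief in climate change is regressed on cognitive reflection, party, and their interaction; the interaction coefficient is what produces the crossing lines in the plot described above.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a scissor-like pattern (all numbers are invented for illustration).
rng = np.random.default_rng(1)
n = 2000
crt = rng.integers(0, 4, n)                      # 0-3 correct CRT answers
republican = rng.integers(0, 2, n)               # 1 = Republican, 0 = Democrat
# Democrats: belief rises with CRT; Republicans: belief falls with CRT
logit = 0.5 + 0.6 * crt - 1.0 * republican - 1.0 * crt * republican
belief = rng.binomial(1, 1 / (1 + np.exp(-logit)))

df = pd.DataFrame({"belief": belief, "crt": crt, "republican": republican})
model = smf.logit("belief ~ crt * republican", data=df).fit(disp=False)
print(model.params)   # the crt:republican coefficient captures the "scissor"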
And we find a lack of evidence for a motivated reasoning account on a bunch of other issues as well. For example, there is no such evidence for fake news. There is one study where people were asked whether or not Trump won the 2020 election, and among Trump voters, again, the more deliberative ones were more likely to believe that actually Biden won. And the same holds for COVID-19 misperceptions — no scissor-like pattern in the results. Okay, so it's not an effect that generalizes to other scientific issues; it's very specific to global warming. But what causes this kind of effect for global warming? And this is the point where I have to emphasize the importance of prior beliefs. Prior beliefs are basically, specifically, what you think about climate change when you come to the experiment: do you think it's real, do you think it's not real, do you think it's risky or not? And it is, both psychometrically and conceptually speaking, a different concept from, say, partisanship. When we correlate the two, we see that there is of course some correlation between prior belief and partisanship, but it's around 0.4 — so, psychometrically speaking, they are largely independent. And when we think about prior beliefs, why are they important? Because if you evaluate evidence based on your partisan motivations, that can be considered irrational. But when you evaluate evidence in the light of everything else you know — that is, your prior beliefs — then it is basically updating: you take into account everything else you know about the topic when you evaluate the new evidence or new arguments you see. And that could actually explain the kind of pattern we saw for climate change in at least two ways. The first is that people who are more reflective might be more likely to have different priors than those who are more intuitive. That could correlate with partisanship and could actually be the cause of the scissor-like pattern, which might then have nothing to do with partisanship at all. The other reason is that more reflective people may also rely more on their priors during updating, and that could produce the pattern too. As I said, so far there is no causal evidence either for the motivated reasoning account or for this kind of prior-belief-driven deliberation account, and this is what we are going to try to do: find causal evidence for one or the other. We also tested a third account — I'm going to talk about that in a second — but let's first look at the methodology. What we use is a so-called two-response paradigm, in which there is basically an intuitive condition and either a between-subject or within-subject deliberative condition. In the intuitive condition — I will show you in a second — we make people think intuitively: basically, we inhibit their deliberative abilities by introducing a strong working-memory load and time pressure. And in the deliberative condition, there are no constraints on thinking; you can think as much as you want. And we give people pro and contra climate change arguments, where the contra arguments are always bad misinformation, or at least arguments that have been debunked several times before. And then we ask people how much they agree with the argument. So here is what the two-response paradigm looks like. This is what people are presented with first.
You get two seconds to memorize this pattern, and then we give you the argument and ask you how much you agree with it, and you have 28 seconds to read the argument and give a response. The 28 seconds were calibrated to the average reading time for the arguments, so it's very, very strict. Then we return to the memory task and ask you which pattern is correct. And then, if it's the within-subject condition, we present you the same argument again and ask you how much you agree again, but this time there are no constraints. Okay. All right. We ran two studies like this; there is no important difference between the two studies, and they gave the same results. Anyway, what does the identity-protective cognition account predict? Basically, that for Democrats, when they are presented with a contra climate change argument, deliberation will decrease their belief in it — after deliberation you are less likely to believe in it, just because the contra argument is not in line with your partisan motivations. And you would see the opposite pattern for Republicans: the contra argument supports their Republican identity, therefore, when they deliberate, they should increase their agreement with those arguments. And for pro arguments, you would expect the reverse pattern: it should be the Democrats who increase their agreement after deliberation and the Republicans who decrease it. And there is one more account I have not talked about, which is the original dual process theory. So there is the identity-protective account, there is the belief-driven account, which we haven't talked about much yet, and then there is also a third account, which basically says that you are trying to be as accurate as you can: regardless of your politics, your prior beliefs or whatever, you will basically just figure out the best solution. So you will decrease your belief in contra arguments and increase your belief in pro arguments, whoever you are. That is not what we see. The results are actually consistent with none of these accounts: we find no evidence for the classical dual process theory, and we find no evidence for the motivated reasoning account. Basically, if you look at Republicans here and here, pretty much nothing happens — they don't change their opinion after deliberation. Democrats do seem to significantly decrease their belief in the contra arguments, which is of course in line with the motivated reasoning account, but for Republicans nothing happens. So perhaps it is really the prior beliefs that are at play here. We then categorized people based on their prior beliefs — whether or not they believe in climate change — and then we see this: when we look at deniers, not Republicans — deniers who don't think that climate change is real or risky — they increase their belief in the contra arguments after deliberation. Believers, by contrast, actually decrease their belief in the contra arguments after deliberation. And we find no effect for the pro climate change arguments — I will explain in a second why that is. But overall, this supports the belief-driven reasoning account over the motivated reasoning account. But why is there no effect for pro arguments?
And we find that the deliberation effect is highly dependent on whether or not people are familiar with the argument, and people are much more familiar with the pro arguments than they are with the contra arguments. There is a 0.7 correlation between familiarity and the interaction effect size. So people are very familiar with the pro arguments, and therefore they already intuitively know what they think about them, so they don't have to think about them at all. Okay, so far, no evidence for a politically motivated reasoning account, and some evidence for the belief-driven deliberation account. Now you could argue that in this experiment we only manipulated deliberation, but we didn't try to manipulate either prior beliefs or political identities. This is what we did in the second experiment: basically, we tried to manipulate political identities. This is an instruction manipulation taken from previous studies, which showed that if you tell this to people, they will keep their political motivations, their parties and identities, in mind when they think about the problems. So in one between-subject condition, people had to read this, to make them think about their partisan identity while they were evaluating the arguments, and there is a control condition. And this is what we found. Of course, when an argument is concordant with your beliefs, then the CRT score — the deliberation score — basically increases the agreement, and when it's discordant with your political identity, it decreases it. That's the very basic motivated reasoning pattern, which we can replicate here. But in the motivated reasoning condition, where people actually got this instruction to think about their political identity, we see that it actually decreases polarization — completely the other way around from what you would expect under a motivated reasoning account. So it does have an effect, of course; it's just a negative effect, not what you would expect from the perspective of the motivated reasoning account. So again, no evidence for a motivated reasoning account. Conclusions, then. First, global warming is a very unique case where higher reasoning skills are associated with inaccuracy; it definitely does not generalize to other scientific issues. Second, reasoning seems to facilitate the coherence between one's prior beliefs and the arguments that one is exposed to. And the third one is that people with different reasoning skills may be polarized simply because they engage with different information environments. So if, as Steve already said, you watch too much Fox News, it might be that you are more likely to be exposed to anti-climate or science-denialist arguments, and therefore more likely to believe them or to develop a prior belief which is more in accordance with climate science denial. In sum, reasoning does not seem to facilitate motivated reasoning at all.
A widely-held explanation involves politically motivated reasoning: Rather than helping uncover truth, people use their reasoning abilities to protect their partisan identities and reject beliefs that threaten those identities. Despite the popularity of this account, the evidence supporting it (i) does not account for the fact that partisanship is confounded with prior beliefs about the world, and (ii) is entirely correlational with respect to the effect of reasoning. Here, we address these shortcomings by (i) measuring prior beliefs and (ii) experimentally manipulating participants’ extent of reasoning using cognitive load and time pressure while they evaluate arguments for or against anthropogenic global warming. The results challenge the politically motivated system 2 reasoning account: engaging in more reasoning led people to have greater coherence between judgments and their prior beliefs about climate change - a process that can be consistent with Bayesian reasoning.
10.5446/54815 (DOI)
Maybe I should first introduce myself. I'm a scholar of media and communication studies at the University of Mannheim, so my main focus is on political journalism and how it shapes attitudes and behaviors. I did a lot of research on political populism in communication, but this is not what I'm going to talk about today; rather, I will be focusing on a new line of thinking that has kept me busy in recent months or years, which is: do social media polarize? And my focus, since I am a communication researcher, is mainly on the filter bubble hypothesis, which is very prominent in our field. So I think this is not exactly what Stephen talked about — this is where the complementarity is. Sorry about that, the slide should have been animated differently, so bear with me. I don't think I have to talk a lot about this part anymore, because we already heard it in the keynote talk this morning. There is an empirical observation that especially affective polarization, as you said and already showed us with some data, is increasing — not in all countries, mainly in the US. Maybe that is a little bit of a limitation to my research, because my empirical research is based on German data, and we can of course — and this is maybe a point of discussion that I will come back to in the end — question whether that is actually the most important aspect for German political communication, whether there are polarization effects or not. But we can leave this open for the moment. Now, of course, the question is — and that is also the intention of the track that this talk has been placed in by the organizers — what are the roots of this increase in polarization? And there are a couple of, I would say, lines of discussion in the literature. I'm sticking pretty close to the literature from the field of political psychology, which is very dominant also in my discipline when it comes to arguing about polarization. And there I see three lines of reasoning: social identity-based explanations, ideology-based explanations, and more or less information exposure or information behavior-based explanations. And that is, of course, where the whole filter bubble argument is situated. So the idea is that social media might be driving polarization, and that is of course also an idea which is discussed a lot in public discourse on polarization. We see a lot of newspaper coverage on the question of how platforms might have contributed to the development of polarization in our societies, and also a lot of non-academic societal discourse, in institutions and in the political sphere, on the role of platforms here. And I think one very famous argument — and I'm very happy that this was not central to Stephen's talk — is the filter bubble hypothesis developed by Eli Pariser, a journalism scholar from New York. He says that algorithmic content selection is based on users' political preferences, and because of that, the algorithm will show users what they are already interested in, which means that partisan-consistent information environments might develop. And the argument is that this does not stay without effects: the assumed effect is that partisan ties will increase further and affective polarization will be promoted by algorithmically shaped content selection. Now the first empirical problem is that, of course, it's very difficult to study these kinds of questions.
I think Stephen made that clear already. But there are some studies which have been trying to figure out whether filter bubbles actually occur, and they are quite heterogeneous in terms of their methodological approach, so it's really difficult to draw general conclusions from them. But as far as the evidence that I know goes, there is no really convincing evidence that algorithmic selection really creates those filter bubbles — that people are really more and more confronted with only like-minded views if they are using algorithmic content selection systems. But I admit that this is a very open question; we are in urgent need of more research about that. Now, what I did myself, together with a PhD student, Katarina Ludwig, is first a systematic literature review. And I think there will probably be a lot of overlap, at least in the bodies of studies that we looked at. Our question was a bit different from yours: we did not only focus on causal effects, but focused on all studies that include social media use as an input variable and then either affective or ideological polarization, as it's usually operationalized in the political psychology literature, as the dependent variable. And we also looked at studies using cross-sectional data, so it's a bit of a different approach. And, as I understood it, your literature review also covers other dependent variables, so it's a bit different in terms of its approach, and I think most findings are somewhat similar. This is also in preparation; the manuscript is not finished yet, so we are not able to submit it at this moment, but we are working on it. Now, what are our first insights? First, my impression, at least, is that if you use a broad understanding of polarization and look for empirical studies, you find that the research landscape is to some degree conceptually blurry, because studies — maybe that's a bit harsh — tend toward an idiosyncratic understanding of polarization; there are many different ways of conceptually understanding polarization and also of empirically measuring polarization out there. So concise conclusions are, in my impression, difficult to draw. The body of literature that we ended up with was 88 empirical studies — we had some hundreds in the first round of sampling, but we narrowed that down to 88 published empirical studies — and only 31 of those studies investigated polarization effects of social media at the user level. Even though, I admit, we have to be careful about the term "effects", because it's not only causal evidence in the narrow sense that Steven's literature review deals with. Now, we find that the most frequent ways of operationalizing social media use in that context are, first, that studies use survey measures of the amount of social media use, which is a very coarse measurement, I think, because it does not really tell us what people are confronted with when turning to social media and also does not tell us anything about what people do with social media. Then we also find a lot of experiments which actually look at the effects of pro- and counter-attitudinal information exposure, and social media, in that respect, is only a framework: they use stimuli, oftentimes non-interactive stimuli, that have a Facebook logo at the top, but the same experiment could have been conducted without any social media framework around it.
So we can question whether the effects that are observed in such studies are actually social media effects or simply information exposure effects. And we found — because this was our specific focus and we looked for it — only five studies investigating effects of, or relationships between, the use of recommender systems and polarization. So this is an area that is very central to the public debate but is hardly being studied empirically, probably because it's so difficult to study. Now, which patterns of results did we find? First of all, our impression was that there is a heterogeneous landscape of evidence on all questions. There are studies finding polarization effects, there are also studies finding depolarizing effects, and there are null-findings studies, or at least null findings on specific hypotheses within those studies. So it's not that you could really say there is one central line of findings that is really clear. Our impression — our finding — was also that the occurrence of any social media effects is often moderated by the strength of partisan ties and by other political attitudinal variables. That means that not all users — and to psychologists that is of course pretty self-evident — are affected by social media in the same way. We also found that there is a lack of research from multi-party systems, which is of course our justification for looking at the German case empirically; but at the same time that could turn into a boomerang if we admit that there's simply more polarization going on in two-party systems like the US. So yeah, it's a bit ambiguous to stress this point, I guess. This holds particularly with regard to affective polarization; there is more research on ideological polarization from multi-party systems. Our most stable finding was that social media users are more ideologically polarized, and that is something that has been shown for different party systems — but in most studies this is not a causal finding but a correlational finding, so it's unclear where the causality is. In terms of affective polarization, our impression was that there is not one clear direction of effects in the studies: there are studies that establish a causal relationship between social media use and affective polarization and also studies that fail to do so. So we do not see clear evidence here. Also no clear evidence for recommender system effects. So the filter bubble hypothesis is far from being supported at the moment, but at the same time I have to say there is simply not enough substantial empirical evidence to conclude whether that reflects a lack of evidence or a substantive null effect. Now, for the whole filter bubble debate I see two major tasks that we have to engage in from this point. The first one is to analyze algorithm effects and other social media effects on polarization comparatively. My notion is that the algorithm is not all that social media is about; there is a lot of user behavior going on on social media. Users are tailoring their information environments by themselves, and not only the algorithm does that — so there are other things going on in social media use that are not really connected to the algorithm, and we have to look at them comparatively.
And the other thing is that I think we need to analytically isolate algorithm effects — that is, not look at social media use in general as a predictor of polarization, but really try to figure out what the algorithm is doing to users, and maybe also explore in more depth different ways in which an algorithm could work and select content for users. Now I would like to — and this has a little bit of a workshop character, because these are two studies that have not been published yet, just like the literature review — present you some preliminary results of studies that I did going in those two directions. The first one was a comparative assessment of social media effects of different patterns of user behavior. But first, what were my conceptual ideas about that? This is still very coarse, but I think it's more fine-grained than the ideas much research offers us on how social media might affect polarization. The first thing is the infamous filter bubble hypothesis: effects of algorithmic content selection that lead to content environments that are really in line with people's political attitudes. Then there is a second effect in the context of passive content exposure via social media that could occur, and that is selective exposure: users are actively selecting which accounts to follow, which friends they have, and that might also contribute to a like-minded information environment. In some parts of the literature this is referred to as the echo chamber hypothesis. But then there is also active social media use, and I think that might be particularly important for psychologists, because active user behavior on social media might also contribute to the shaping of polarization. Two ideas I've got there — maybe there are more than that — are, first, social media self-effects: the argument that if you're posting about something, or giving some political content a like, then because of our striving for internal consistency this might strengthen us in our own beliefs. That is a hypothesis of self-radicalization through behavior, and maybe through reflection on that behavior. And then there are also feedback effects that might occur — that is the social interaction within social media platforms: if you're posting radical content, you get more likes, and that might strengthen your own political convictions in terms of polarization. Now, the first empirical study that I did tried to more or less disentangle these different mechanisms, even though I have to admit that this was not perfectly possible with the data at hand. What I did was a secondary analysis of the German Longitudinal Election Study — we just had an election in Germany last week, but this is the data from the 2017 Bundestagswahl — and I looked at affective polarization only here. So: how did the evaluation of one's own party camp develop over the course of the campaign? There were six panel waves, waves two to seven, that went into this analysis, and it was possible to reconstruct how partisans — people who liked one specific party a lot at the beginning of the campaign — how their polarization developed over time, over the course of the campaign.
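As an aside, a growth-curve model of the kind one could fit to such panel data might look like the sketch below. The data are simulated, and the variable names — for instance a binary "commenting" indicator, anticipating the predictors described next — are illustrative assumptions, not the GLES variables.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated panel: party evaluation over waves, random intercepts per respondent,
# and a wave-by-social-media-use interaction as the "polarization over time" term.
rng = np.random.default_rng(2)
n_resp, n_waves = 300, 6
rid = np.repeat(np.arange(n_resp), n_waves)
wave = np.tile(np.arange(n_waves), n_resp)
commenting = np.repeat(rng.integers(0, 2, n_resp), n_waves)   # active use (0/1), assumed
intercepts = rng.normal(0, 1, n_resp)[rid]
party_eval = (5 + intercepts + 0.05 * wave
              + 0.10 * wave * commenting
              + rng.normal(0, 0.5, n_resp * n_waves))

df = pd.DataFrame({"id": rid, "wave": wave,
                   "commenting": commenting, "party_eval": party_eval})
m = smf.mixedlm("party_eval ~ wave * commenting", df, groups=df["id"]).fit()
print(m.summary())   # the wave:commenting coefficient is the change-over-time effect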
And there were different social media variables included in this data set as well, which enabled me to disentangle the effects of passive exposure to political posts, of following party-owned accounts, of the frequency of sharing political posts — so active user behavior — and of commenting, on the development of affective polarization over time. With regard to time, I will skip the regression results here and come to my summary in words. What I found was, first, that party evaluations were mostly stable over the course of the campaign. This again tells us something about affective polarization in Germany, or maybe in multi-party systems more generally: it might not even be the most pressing problem in this kind of system, and I think the current election results might argue in that direction as well. Second, passive exposure to political posts tended to stabilize party evaluations, particularly so among partisans of the right-wing populist AfD and the neoliberal FDP — there we saw the strongest social media effects. And this, in my reasoning, suggests that there were no algorithm-caused polarization effects at work, because we were able to look at selective exposure and active social media use variables separately; so I think the variance that remains to be explained by the general amount of exposure to political content via social media might be closer to the actual variance that is caused by the algorithm. This could be interpreted in that direction, even though it's still only an approximation of algorithm effects. For the following of party-owned accounts, which is selective exposure, and also for active commenting, we saw an increase in affective polarization — again, only among AfD and FDP partisans. For the other parties' partisans, there were literally no effects at all. So this suggests selective exposure and social media self-effects, or social impact effects, being at work here. Now, for the other direction that I argued we should look into, this is even more at the initial stage. This is an experiment that has just been in the field, where we looked at the way different types of news recommendation systems — ones that are actually running — influence polarization at the user level. This is data that I just analyzed this week; it is really preliminary, and it might be that the results look different after including additional covariates or so. This is a large-scale project with colleagues from KIT Karlsruhe. What we do here is that we use different types of news recommendation systems — content-based recommendation systems in the first place, so recommenders that only look at which texts are similar to each other — and we want to include user variables in the next steps. So there will be additional experiments where we also look at demographic recommendation, as the data science colleagues call it, and see how these might affect polarization in comparison to random recommendations. We have a body of almost 4,000 articles on migration, and we let the users, in four iterations, choose one article that they are supposed to read from a selection of articles; this is repeated four times. This is how the front end for choosing these articles looks, and the articles are either randomly picked or recommended by a recommender system — that is our approximation to the algorithm. This does not look like Facebook, I know.
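To give a rough idea of what a content-based recommender with an optional negative-sentiment bias can look like, here is a toy sketch using TF-IDF similarity. The articles, sentiment scores, and the form of the bias term are invented for illustration; the project's actual system will differ.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

# Toy article pool with invented sentiment scores (-1 = very negative, +1 = very positive).
articles = [
    "New report on migration and the labour market",
    "Violence at the border sparks outrage",
    "Study finds integration programmes are working",
    "Asylum numbers fall for the third year in a row",
]
sentiment = np.array([0.1, -0.8, 0.6, 0.0])

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(articles)

def recommend(read_idx, k=2, negativity_bias=0.0):
    """Rank unread articles by similarity to what was already read; a positive
    negativity_bias boosts negative articles, mimicking engagement-driven feeds."""
    profile = np.asarray(tfidf[read_idx].mean(axis=0))
    scores = cosine_similarity(profile, tfidf).ravel()
    scores = scores + negativity_bias * np.clip(-sentiment, 0, None)
    scores[read_idx] = -np.inf                 # never recommend already-read items
    return np.argsort(scores)[::-1][:k]

print(recommend(read_idx=[0], k=2, negativity_bias=0.5))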
But at least content-based recommendation is at work here, and there are not really many studies that actually do that. Now, very quickly to our results, because I'm running out of time. In terms of affective polarization, we find that using any kind of recommender system led to less affective polarization compared to reading randomly selected articles. That suggests that, at least for content customization, there might even be depolarization effects: because the algorithm selects articles for me that are consistent with my convictions, I might be less polarized than when I'm confronted with articles that are simply randomly selected and therefore might contradict my own opinion more strongly. So there might even be positive news here. But we also see that recommender systems that have a negative sentiment bias — which is of course closer to what Facebook and Twitter do, because they consider the emotionality of posts, since the more emotionally charged posts receive more interactions — increase affective polarization. There is an interaction effect at work here, though: this only occurs if people spend more time on the content that they are confronted with, not if they have simply been clicking through the articles we confronted them with. In terms of ideological polarization, we hardly find any effects in this particular study. The only exception was that a recommender system with a balanced sentiment tended to increase ideological polarization. That might be a similar case as above: if I'm confronted with content that is not completely in line with my convictions, then this might lead to polarization. Now some concluding remarks — my last slide. My impression is, and this is really subjective and personal, that the field suffers from conceptual heterogeneity and empirical inconclusiveness, or large empirical gaps, as you want to put it. I believe there are important gaps if I think of the filter bubble question and whether it has any effects — in particular with regard to the isolation of algorithm-caused polarization effects, because that argument is so strongly emphasized in public debate. But I think we are still very much poking around in the dark here. And there are also gaps concerning the effects of selective exposure and active social media use, which is a bit in the background of the current debate, I think. My impression is that, in the bigger picture, algorithmic content selection rather seems to stabilize or depolarize partisans, if you isolate only the effect of algorithmic content selection, and that observable social media effects on polarization might not mainly be caused by the algorithms and the platform architectures, but rather by users' behavior on the platforms and with the platforms — and that, of course, also includes the leaders, as you called them, Steven: the users that have a lot of followers and that are large-scale content producers. But I admit, as I said a couple of times already, there is not enough empirical research to really be sure about those conclusions, so this is only my first impression. Thanks a lot for listening — I'm afraid I took a bit too much time.
One of the most heavily discussed questions in the context of belief polarization in recent times is the contribution of social media technologies. The infamous "filter bubble" hypothesis suggests that social media algorithms are, by design, driving ideological as well as affective polarization in users. Even though empirical evidence on this mechanism is mixed, at best, this assumption has become a fixture of any debate on polarization. Drawing on a secondary analysis of large-scale panel data and a systematic literature review of empirical research on social media's polarization effects (both work in progress), I will argue that the question whether social media contribute to polarization has to be answered in a much more nuanced way. Findings of the two studies indicate that we are far from having consistent evidence on polarizing effects of algorithmic content selection. Rather, social media-related polarization effects seem to result from users' own content selection choices and their active participation in social media communications (i.e., posting, sharing, liking, etc.).
10.5446/54797 (DOI)
Dear audience, welcome to my presentation on financial management in higher education institutions. I would like to present some experiences and four basic lessons learned, which are really important for universities and faculties and for their financial management. So let's start with basic lesson learned number one: funding sources for universities and faculties are becoming more diverse; therefore institutions need a financial strategy, and of course also financial leadership, to deal with that. So let me tell you a bit about the financial strategy that is necessary. First of all, the assumption is that the diversity of funding sources is increasing. Traditionally, the two major sources to finance a university are, in the case of a public university, state funding, and in the case of a private university — also partly public, but for privates even more important — tuition fee revenue. So these are the two main sources. But we found during the last years that the reliability of these two sources is limited: if you really depend on one of them, then your risk spreading is very low. Take tuition fees as an example: the COVID-19 crisis showed how difficult it is for universities which really have to rely on their tuition fee revenue if suddenly the international students don't come anymore. So you depend on it and you don't have it — you have a high risk from being dependent on these two sources. On the state side, on the side of public funding, we have shifts from basic funding to program funding. So even the state itself doesn't provide just one source of funding; it provides many different sources: competitive programs, excellence initiatives, and all kinds of things. And what is also important, of course, is that universities benefit from more autonomy, so they have new chances to get into contact with sponsors and people who give them money. All that means the variety of financial sources, including new financial sources, increases — and it has to increase. So if the diversity of funding is increasing, which are the new sources that are extremely relevant? Maybe not the complete list, but the most important are here. There is of course external research income, which you can get from different donors: from the state, from private institutions, from foundations, from international organizations like the European Union or the World Bank, or whatever. Then universities generate revenues out of technology and knowledge transfer; they generate patents which they can benefit from. They have their own revenues, for instance in the form of rents for buildings, vocational training offers, bookshops or printing companies, or sometimes even television stations — so there are different ways to generate your own income. In Southeast Asia and many other countries, universities are often owners of companies, and owning your own company — for instance a pharmaceutical company that comes out of a faculty which deals with that kind of topic — contributes largely to the income of universities. So: entrepreneurial activities. And last but not least, there is fundraising and sponsoring. So, many options and many chances. If you take a strategic approach, I think you have to plan your funding portfolio as a university and as a faculty. And let me give you an example for that. Assume you're a university. On the one hand, you look at the financial market growth — that means you look at a certain source of funding and ask: how is this source developing? Is the source getting larger or is it shrinking?
So, money from private companies: are they in a crisis? Then there is low growth. Are they doing well? Then there is high growth of the market for potential funding — we are talking about funding sources. And there is also the market share, which means: how successful are you so far as a university, how well can you exploit that financial source? Let's assume there is a public excellence program — public money for the most research-excellent universities. This program is new, so the market is growing; there is a large sum of money that you can get, but your market share might be low because you didn't do much so far in terms of research excellence. So what could be your strategy? Your strategy could be to move that one to the right, because in the public excellence program you might be successful and then you might increase your market share. So use the chance — or, if you find out you have no chance, then don't do anything and get rid of it; you should not invest if you know from the beginning that you will not succeed in getting money out of this program. So either invest or leave it: that's the norm strategy for that part of the diagram. If you are more to the right, maybe you have transfer activities to a booming industry. What would you do here? You exploit this financial source, you do as much as possible, you just take the money — so you do a lot of projects to generate money. This is your star; you're really successful here. If you are more at the bottom of the diagram — for instance, you run training courses and get money for training courses that you run for a declining industry — then your market share is high, you're good at this, but there is no market growth anymore because the industry will disappear. So what are you going to do? Either you give it up — you might decide this is not worthwhile anymore, the industry is dying, we stop the training program — or you say: we do something similar for a different industry, we use our knowledge to create a new market. So you relaunch this activity and get new funding out of it. So my argument is: you can use that kind of portfolio analysis to analyze your financial portfolio, your portfolio of income streams, and you can decide where you are going to do more in order to generate money — a strategic approach. If you are down here — well, you might have revenue from renting out lab space, but the market growth is low and the market share is low — then leave it; this is the thing that you should not do, it is not worthwhile doing. So, a strategic approach. There are further issues. You need to link the source of funding with the university profile — lifelong learning university, entrepreneurial faculty. If this is your profile, of course, it determines which financial sources you can use: if you are a lifelong learning university, you generate money from lifelong learning, that's clear. Then the question of limits: are we maybe drifting too far away from our core missions? I have visited universities in Indonesia, for instance, where we found really, really a lot of entrepreneurial activities — they run IT companies and everything. But at a certain point, I think a university has to ask: are we going too far away from our core missions in teaching and research? Are we doing too much just to generate money? Are we like companies in the end? Or should we at a certain point see a limit to entrepreneurial activities? I think it's an open question where this limit is, but I think you have to think about it.
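To summarize the portfolio logic sketched above in one place, here is a tiny illustrative classifier. The sources, numbers, and thresholds are invented for illustration and are not a recommendation from the talk.

# Growth-share style classification of funding sources (all values invented).
def norm_strategy(market_growth, market_share, threshold=0.5):
    high_growth = market_growth >= threshold
    high_share = market_share >= threshold
    if high_growth and high_share:
        return "star: exploit the source as much as possible"
    if high_growth and not high_share:
        return "question mark: invest to gain share, or leave it"
    if not high_growth and high_share:
        return "cash cow / relaunch: keep milking it, or move the offer to a new market"
    return "dog: give it up"

portfolio = {
    "public excellence programme":     (0.9, 0.2),   # (market growth, market share)
    "transfer to booming industry":    (0.8, 0.8),
    "training for declining industry": (0.1, 0.7),
    "renting out lab space":           (0.2, 0.1),
}
for source, (growth, share) in portfolio.items():
    print(f"{source}: {norm_strategy(growth, share)}")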
Risk management: yeah, if you have a certain financial source, you have to assess how probable it is that you will have it in the long run and what the potential loss would be if you lose it. And if you have many sources, you can spread the risk — I think that's a good strategy to mitigate risks, to spread them. Sustainability: sources of funding for infrastructure and buildings are an important aspect. Then cash flow management, and the management capacities that you need for financial strategies: marketing, accounting, industry relations, and also proposal writing. By the way, proposal writing: many universities invest a lot to teach all the academics how to write a proposal. Some universities in my country are now investing in offices for proposal writing, which provide a service to the academics and are extremely professional in writing proposals. So ask yourself: what is our strategy here? Should everyone learn how to write a proposal, or should people collaborate with professional proposal writers? Just an idea. So that was my first point, financial strategy. Second important point: the traditional input-oriented line-item funding is problematic; universities need more performance-based funding. So, PBF — performance-based funding — as an important element of allocating money in universities. Let me explain, and let's start with the traditional funding. What are the features of traditional funding and what are the problems related to these features? Traditional funding is line-item budgeting: you have these line items for staff, for professors, for traveling, for machines, for other office products. So what is controlled financially is the input: you say you have to spend so much for staff, and this is what is being controlled. This is totally inflexible; you cannot link goals and money. You create a very inflexible allocation if you fix the line-item budget. Budgets traditionally are yearly: you plan and you decide on the budget for one year, which means — and many of you might know this — there is the famous December fever. What is the December fever? If the budgetary year is the calendar year and you find out in December, oh, we have some money left, what are you going to do? You are going to spend it like hell, because if the money is left over and you are a public university, your government would say: hey, there is money left over, you have too much money, so please give it back to us, you cannot keep it. And what do you do? You just spend it, no matter if it makes sense or not. You cannot plan with reserves: it might make a lot of sense to build up a reserve to be prepared for a future challenge, but you cannot do it because the budget is always yearly. What you are not going to spend, you have to give back to your government, in the case of a public university. So: December fever, inefficient. The yearly budget also leads to instability; you don't have a reliable calculation base over several years. You are always going for one year, and if the government decides, oh, I am sorry, we have 10% less, then next year you have 10% less. So you cannot calculate in the long run. More problems: incremental budgeting. What does that mean? The budget of the year 2020 mainly depends on the year 2019, because people look at last year's budget and say: okay, this is what we had, now we want it again — or maybe we want a 5% increase, or there is a 5% decrease. So there is no real reallocation; it's just perpetuated incrementally. What does that mean? There are no incentives.
You always get the same. There is no performance orientation. If the allocation decisions are made centrally at the top level, assume the president of a university is making the allocations to staff, to traveling, to whatever, so the line item budget, then everything is determined ex ante. And these central decisions suffer from low information and from inflexibility. If the president or the vice chancellor determines in the beginning how much you can spend, you are not flexible later in the year to adjust your funding. All this is determined already by the line items. Inflexibility. And usually, traditionally, you aggregate bottom-up financial plans. So you ask the department for a financial plan. The departmental plans are aggregated to the faculty and the faculties are aggregated to the university as a whole. If you do it that way, how can you build priorities? Because everyone wants to have the same again. So this is linked to the incremental budgeting. So no real strategic priority setting. So you see, we need change here. This traditional funding has severe problems. It's inefficient. It creates December fever. It's inflexible. It's not performance oriented. So what would be an alternative? The alternative would be performance-based funding. This offers an alternative. And let me give you briefly the storyline of this new funding approach, and assume we're talking about internal allocation inside the university. The steering now focuses on objectives. The university follows a strategy, an explicit strategy. The strategy works if it is supported by performance-based funding, PBF. And what is performance? Of course, this is determined by the strategic goals. Good performance means the realization of the strategic goals. So this is related. This means that funding has to be directly related to the objectives. Incentives to reach them have to be created. So performance is rewarded. The strategy gives orientation to the funding. And the funding makes the strategy work. And there's a preference for steering with incentives and competition and not with rules. This is not regulation. This is creating incentives. This is creating competition. So financial decisions are made autonomously by the faculty. There's a lump sum budget. They can build reserves and everything. And there are incentives to do the right thing, to follow the strategy. And the incentives define the consequences. So these decentralized decisions are better informed and more flexible. You should leave it to the faculty to decide what the money is spent on. So that's the basic idea of PBF, which directly leads to my third out of four lessons learned. And this is: how can we integrate this performance-based funding into an allocation model? There is a famous concept, which is called the three-pillar model. This model creates a balance between different orientations of a funding model. So let me explain that briefly. The combination of basic funding and performance-based funding leads to three pillars of funding that you can find in a model. First, there is a basic task-oriented funding. Quite often this is 80 to 90% of the allocation. The idea, the purpose here, is: we want something stable. We want a stable basic funding covering the costs, allowing you to fulfill your task. So this comes as a lump sum, or you calculate it on the number of staff members or study places. So this is a very stable component in your funding.
So if you distribute money from the university level to the faculty level with a stable task-oriented basic funding, you get a certain sum every year. So this is incremental. This is still incremental. But in the three-pillar model, there are two more pillars. There is the performance-based funding. This is to influence behavior with rewards and sanctions. These are ex-post incentives for performance. So you have a formula, you have formula funding. You measure last year's performance with indicators. And if you were good, high numbers of graduates, high publication output, whatever, you are rewarded by a financial algorithm. So that's the formula. And performance is rewarded. And the third, and this is also part of the performance-based funding, is the innovation-oriented funding. This is the programmatic part. You finance innovation projects in advance. In the end, you control the result of the innovation. So you induce a competition. For instance, you have a competitive fund. The president of a university says: here I put aside 1 million euros. This is our competitive fund. And now you faculties can make innovation proposals, what you want to change, what you want to do. And you can get pre-funding. You can get support from it. But still, I want target agreements from you. I want you to tell me what performance indicators we want to achieve in four or five years' time. And then you get a reward for following your objective. But in the end, we will check if you really did. And a good funding model, that's my idea, a good funding model makes a combination of the three components. For instance, it puts 80% into basic funding, 15% into performance-based funding, and 5% into innovation, or other percentages. But if you combine it, you have certain advantages. Because with the three-pillar model, you can turn an objective into funding and into incentives. You can relate performance and funding. Example: think about a university that wants to promote internationalization. So that's the objective. You see my three pillars. So how can you use the three pillars technically to promote internationalization? In the basic funding, you could say, maybe, if I calculate basic funding by student numbers, a student who comes from another country gets a higher weight in the formula. So basic funding would increase if you have more international students. In the performance part, you could say, we measure as a KPI, a performance indicator, the incoming and outgoing students. So we reward universities, we reward faculties inside a university, which are very successful in that. Or you have an international research fund, which you spend just on international orientation. Innovation-oriented: yeah, you make target agreements about internationalization. You create a competitive fund for internationalization projects. So you induce competition to follow the objectives for the future. You haven't achieved it yet. But you say to your president, again, you say as a faculty: I want to be more international. I have a good idea. President, give me money out of your competitive fund. And we promise you that in the future we will have 10% more foreigners in our study programs. In the performance-oriented part, you promise, no, you don't promise, but you already had 10% more. So you will be rewarded for the past performance. In the innovation part, it's about the future performance. But both are performance-based funding. So you have many design options.
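As a rough, hypothetical illustration of how the three pillars could be combined numerically: the faculties, indicator weights, the 1.5 weight for international students and the 80/15/5 split below are made-up values for the sketch, not recommendations from the talk.

```python
# A minimal three-pillar allocation, purely illustrative.
total_budget = 10_000_000  # e.g. euros to distribute from the university to faculties

faculties = {
    # students, of which international, graduates, publications (hypothetical)
    "Engineering": {"students": 4000, "intl": 400, "graduates": 800, "pubs": 300},
    "Humanities":  {"students": 3000, "intl": 600, "graduates": 700, "pubs": 150},
    "Medicine":    {"students": 2000, "intl": 200, "graduates": 500, "pubs": 450},
}

shares = {"basic": 0.80, "performance": 0.15, "innovation": 0.05}

def pillar_allocation(faculties, total, shares, intl_weight=1.5):
    # Pillar 1: basic, task-oriented funding per weighted study place;
    # international students get a higher weight, as in the example above.
    weighted_students = {f: d["students"] + (intl_weight - 1) * d["intl"]
                         for f, d in faculties.items()}
    # Pillar 2: formula funding on last year's performance indicators.
    perf_score = {f: d["graduates"] + 2 * d["pubs"] for f, d in faculties.items()}

    def proportional(pot, scores):
        s = sum(scores.values())
        return {f: pot * v / s for f, v in scores.items()}

    basic = proportional(total * shares["basic"], weighted_students)
    perf = proportional(total * shares["performance"], perf_score)
    # Pillar 3: innovation fund, awarded competitively against target agreements;
    # here it is simply held back as a central pot.
    innovation_pot = total * shares["innovation"]
    return {f: basic[f] + perf[f] for f in faculties}, innovation_pot

allocation, innovation_pot = pillar_allocation(faculties, total_budget, shares)
print(allocation, innovation_pot)
```

A top-up premium on external research income (for example 50 cents per euro, mentioned later in the talk) could be added as a further performance-oriented component.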
And you can, of course, start pragmatically. The three-pillar model is applicable on all levels of internal funding. So between university and faculties, between faculties and departments, between departments and individuals, you can implement it with a big bang. Yeah, you can say: now, completely new funding model, here are the three pillars. But you could also follow a very careful step-by-step approach. For instance, if you want to implement a funding formula with performance indicators, then you can do it stepwise. You can say, I start with 2% of the budget distributed by a formula, but I go to five in one year's time and to 10 in five years. So you can stepwise enlarge the performance-based part. And you could start with pragmatic and small components. You could just create a fund to kick off new research projects. You could top up external research revenue by a kind of premium. You're saying: if you get one euro of research income from outside, we put 50 cents on top from our internal distribution to create an incentive. So that's my recommendation. If performance-based funding doesn't exist in your university so far, take small steps, start pragmatically. You don't need a revolution of the whole funding model, but I think every university can move to some extent into the performance-based orientation. The balance that is created is between stability and competition. It's between ex-post rewards for past performance and pre-funding for future performance. You saw these two pillars. The performance-based funding could be based on indicators and could be based on peer review. If you have a competitive fund, you could let peers decide who gets it. If you have a formula, you let the indicators decide. And the two things create a balance, a balance between different instruments like formulas and target agreements. So that was my lesson learned number three. Number four is a very short one. It could be a long one, but I kept it short here. There are a lot of instruments of higher education management to be applied to the funding and finance of universities and faculties. Professional knowledge is available and should be used. There is a lot of instrumental knowledge out there about how we can apply funding instruments to universities. There are a lot of design options and principles for funding formulas, for target agreements, for competitive funds. There are already established tools for risk management, how to analyze and deal with risks. There are financial reporting tools to support your decision making. There is cost accounting for universities. It has to be different from cost accounting in private industry. In private industry, it's about the profit. In higher education, it's about the performance in teaching and learning and research and third mission. So cost accounting means something different. There are concepts to assign financial competences between the levels of universities. And there is much more. I just wanted to mention that here. And I just wanted to bring across the basic principle that you can use the knowledge from business management, but you should adapt the whole thing to the culture and the context of a university. Cultural customization is an important catchword here. You really have to adapt to the academic culture. All these things are a lot of material for another presentation, but that's not my task here today. So I just wanted to mention that. So bringing the four lessons learned together, they show how the financial management of universities can be successful.
Number one was: develop a funding strategy, a strategic approach. Number two was: apply performance-based funding. Number three: create a balanced three-pillar model of resource allocation. Number four: professionalize your financial management with specific tools for the higher education context and customize them to the culture. That's an important aspect. Thank you very much for your patience and for your attention. I hope these four lessons have been helpful. Thank you.
This video addresses some fundamental developments in higher education financial management, for example the fact that the diversification of universities' financial resources is increasing. Financing strategies of universities must take these developments into account and be prepared for new challenges in financial management. Traditional financing models are reaching their limits. To illustrate this, various problems of traditional financing models are highlighted and an alternative is proposed in the form of performance-based funding. Using the three-pillar model as an example, it is shown how performance-based funding can be incorporated into university financing strategies and what advantages this offers.
10.5446/54795 (DOI)
What I want to talk about today is social media and the effects on polarization. And I'm going to depart from two assumptions. The first one is that democracy is currently in retreat worldwide. This is a map by the V-Dem Institute for Democracy in Sweden that looked at the trends between 2009 and 2019 and discovered that when you look around the globe, there is a number of countries where democracy is in decline. Those are the orange countries, including some in Europe and, concerningly, including the United States. And there are fewer countries in which democracy has been increasing. These are the same data yet again shown at the level of individual countries and comparing 2009 to 2019, and any point that is below the diagonal represents a decline in democratic health during that 10-year period. And if you look around that space below the diagonal, then you find a lot of countries that are close to us here in Germany or that are very important to us, like the United States. Now, there are some other countries where things have improved over time, but by and large, in key countries for us in the Western world, I would argue that there has been a decline in the health of democracy. Equally, there has been increasing polarization, again in certain key countries. These are the data for the United States in a recently published report by Boxell et al., and you can see how affective polarization has been increasing over the last 30 or 40 years. Affective polarization is the difference in feelings of warmth towards people from your own party versus the other party. And on a scale from 0 to 100, the difference between the parties is now around 45, which is a lot. I mean, that means if you're a Democrat, you really don't like Republicans and vice versa. But there are other data suggesting that affective polarization in the United States is greater for politics than it is for race. People are more divided now in America along party lines than they are along race. Now an interesting aspect of this polarization is that, at least at the level of political leadership, it is not symmetrical. What I'm showing you here are data by Hare and Poole that were published some time ago, where they did a very nice statistical analysis of the relative positions of the two parties in the United States over time. In fact, they went back to 1879. So we're talking about 120 years of history here. And what you can see is that since about the 1970s, the Republicans have become more extreme, or have migrated further away from the center, than the Democrats have, who are relatively stable. So I think it's important to keep that in mind: just because there's polarization doesn't necessarily mean it is symmetrical and that people walk away from the center equally. It could also be that one side is moving more than the other. Now I have to add an important qualification to this, which is that polarization is not increasing everywhere, in the same way that democracy is not failing everywhere. For example, these are the same data from Boxell et al.: in Germany, affective polarization according to these data has been decreasing. The same is true for Sweden and Norway; it's flat in Australia, unlike the US (I think I said Australia, but it's obviously the US), Canada, and Switzerland, where it's been increasing. So the pattern is heterogeneous.
But I think that if we focus on countries such as the United States that are very important to the world, then it is very clear that democracy is in retreat and polarization is increasing. And in Europe, within the European Union, there's at least one country that by most accounts is no longer considered to be a democracy. That's Hungary. And yet it remains a member of the European Union. So those are my departure points for what I want to talk about. So are social media to blame for this? Well, a lot of people say yes, some other people say no, there's a lot of debate about it. And to me, the important thing to start out with is to acknowledge that there really is no binary answer for this. It's not a yes or no. Instead, we have to break the problem down. We have to say, well, what is it that social media might be responsible for? And how? And how would we even know that it is social media? So we can achieve partial answers to this, I think, but not an overall, I don't want to come to an overall conclusion that says yes or no today. What I want to do instead is I want to focus on these three issues. Agenda setting power, micro-targeting, and then the whole notion of establishing causality in all the research we do and how we might do that. So here we go. First issue, political agenda setting. What do I mean by that? Well, the conventional wisdom, and by that I mean 20 or 30 years of research in political science. The conventional wisdom is that the media are the principal agent of agenda setting and politics. It's not actually the politicians, according to the conventional view. It is mainly the media which collectively set the political agenda. And yes, the politicians can then influence that, but typically they themselves are insufficient to set the agenda. And just to illustrate why we know this, here's a couple of examples. One causal effect was shown in an experiment published in Science a few years ago where they actually, you know, the experimenters randomly launched topics in local media in the United States and then observed on Twitter what happened. And guess what? Those topics were picked up by people in general suggesting that the political agenda was set by the media. Likewise, the New York Times coverage of terrorism leads to more terrorist attacks. You can show that. And it is a very elegant study using an instrumental variable paradigm. But in Trump and Twitter. And as I would argue, everything is now different and the conventional wisdom is no longer supported. Now I don't know if you remember this. This is five years ago. So that's in the Jurassic period before Trump assumed office, but after he had been elected and no one remembers this now, in the midst of all the other stuff that's been going on. That Donald Trump got very exercised over the cast of a Broadway play, Hamilton, which after a performance pleaded with Vice President-elect Pence, who happened to be in the audience, for a diverse and democratic America. I mean, I guess they knew what was coming. So they said, you know, let's preserve American democracy. And Donald Trump got very exercised over this, very excited, and went on at considerable length on Twitter to attack, really, that performance and the actors. Now if you look at Google Trends for that time period, for the key words Trump, Hamilton, you find that there was a massive spike in interest, in public interest, around the time that he tweeted. Now Google Trends data tell us what people are searching for. 
So this is telling us that people were searching for Trump and Hamilton. The absolute numbers are not available, but the maximum is always expressed as 100 percent and everything else is scaled relative to that. Now why does this matter? Well, it matters because at the same time, on the same day that Trump engaged in this Twitter activity, he settled a lawsuit against him over the so-called Trump University for $25 million, including a $1 million penalty to the state of New York. So basically admitting culpability in this fraud lawsuit. And the blue line shows public interest in the Trump University settlement. It's about, I don't know, 5 percent of the interest in Trump and Hamilton. So is this coincidence? Did he just tweet something to distract people from news he didn't like? Well, we don't know. But my collaborators and I wanted to examine this possibility more formally. Now to do this, you have to postulate some sort of conceptual model. And this is what that model is. Our presumption was, well, if Donald Trump is diverting the media, then whenever the media covers something that he doesn't like, he's going to start tweeting about something different. And if that diversion is successful, then maybe the media drops this harmful issue or at least reduces it. So basically it's a trivial conceptual model which you can implement in a regression equation, and we expect one positive and one negative coefficient. And we operationalized this using an event in ancient history known as the Mueller investigation, involving potential collusion between Donald Trump's campaign and the Russian government. And this clearly, by the way, was damaging to Trump and he knew that. He hated it. It was damaging to Trump. So I think no one can say that we picked the wrong negative event for Donald Trump. We then went to his record and his campaign literature and everything else and we examined, we tried to discover, what his political strengths were. And at the time, pre-pandemic, it was certainly jobs, the economy. He was going on about this all the time. It was China. He was always an antagonist of China. North Korea, you may remember that Donald Trump has a bigger button than the other guy when it comes to nuclear power, and, of course, immigration was his core platform. So we expected a lot of activity on those topics that were congenial to Donald Trump's political future. And if that were successful, then maybe the media would drop or reduce coverage of Russia and Mueller. So let me now tell you about a study that we conducted some time ago where we scraped all of the New York Times, all of the ABC News segments, the headlines, and all of Donald Trump's tweets for the first two years of his presidency, which is when the Mueller investigation was taking place. And we then did some very simple, relatively simple statistics that related Mueller-Russia coverage in the media to Donald Trump tweeting about his presumed political strengths. And so here's the first set of regression models. Do Trump's tweets divert? Well, you can't see the numbers because they're too small. Don't worry. I have a magnification here. All I want you to take away from this is that the numbers are positive, and they're as significant as indicated by the asterisks. What does that mean? Well, that means whenever the media increase Russia and Mueller coverage, Donald Trump increases tweets about jobs, immigration, China, et cetera, et cetera. That's what this means statistically.
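A minimal sketch of the kind of lagged daily regressions described here, with made-up column names and toy data; this is not the authors' actual specification or dataset.

```python
# Two lagged time-series regressions in the spirit of the conceptual model above.
import pandas as pd
import statsmodels.formula.api as smf

# One row per day with hypothetical daily counts:
# media_rm   : New York Times / ABC coverage of Russia and Mueller
# tweets_div : Trump tweets on "diversionary" topics (jobs, China, immigration, ...)
df = pd.DataFrame({
    "media_rm":   [5, 8, 3, 9, 2, 7, 6, 4, 10, 1],
    "tweets_div": [2, 6, 1, 7, 1, 5, 4, 2, 8, 0],
})
df["media_rm_lag1"] = df["media_rm"].shift(1)
df["tweets_div_lag1"] = df["tweets_div"].shift(1)
data = df.dropna()

# Model 1 ("diversion"): does today's Russia/Mueller coverage predict more
# diversionary tweeting today?  The expectation is a positive coefficient.
m1 = smf.ols("tweets_div ~ media_rm + tweets_div_lag1", data=data).fit()

# Model 2 ("suppression"): do yesterday's diversionary tweets predict less
# Russia/Mueller coverage today?  The expectation is a negative coefficient.
m2 = smf.ols("media_rm ~ tweets_div_lag1 + media_rm_lag1", data=data).fit()

print(m1.params, m2.params)
```

The same pair of regressions can then be re-estimated for each pair of words in the tweets, and the two resulting t-values plotted against each other, which is what the talk turns to next.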
Now does that work? Well, this is the second regression model from our conceptual model, where we're relating yesterday's tweets on those diversionary topics to today's coverage. So what happens after he tweets? What does the New York Times do? What does ABC News do? What do they do together? Well, what they do is less. Here are three significant, small but significant coefficients that are all negative, which indicates that the diversion works. The media are dropping Russia and Mueller in response to Donald Trump's diversions. Now that was a first analysis based on preselected keywords where we presume to pick up Donald Trump's political strengths, but we push this further by examining his entire vocabulary that Donald Trump was tweeting about, excluding things related to the Mueller investigation because that would just contaminate the whole thing. When he's talking about his own investigation, we were not interested in that. We were interested in understanding what he'd be talking about and how much in response to coverage about Russia and Mueller. We also wanted to know what happens if the coverage in the media is neutral, just for comparison purposes. Now I'm going to show you a bunch of graphs that summarizes these data quite strikingly, I think. But I can only do that if I explain how we did the plots, and that'll take me a minute just to walk you through them. So here we have that first regression model I was talking about. How many times does Donald Trump tweet a random pair of words called that X in response to the New York Times or ABC coverage of Russia and Mueller? Well we look at that for each pair of words in his tweets. We estimate that regression coefficient and we plot it on the graph. Well actually we plot the t value so we can indicate significance, but effectively it's just the transformation of the coefficient. And then we look at what the media do the next day to Russia and Mueller in response to that very same pair, pair X. And we estimate another coefficient which we plot on the ordinate and bingo, we have a space in which we can present each pair of words in his tweet. So this is pair X, that's just some pair of words, but of course we also have Y and we have Z and in fact we have a 1400 of these because he tweets about a lot of different things and they come in lots of different pairs. Now we can formulate some expectations about what should happen. Now if nothing happens then we should have a blob of points in the middle. And we should observe that either if there's just no effect or we should observe it for neutral items that Donald Trump doesn't care about. There's stuff in life Donald Trump doesn't care about and I'll show you in a moment what they are. Now if on the other hand something is exciting him then he should tweet more in response to coverage in the media. More means the blob should move to the right. Now if that is successful so the media then stop talking about Russian Mueller then the blob should move to the bottom as well because that means the media are reporting less the next day in response to the tweets. So what we're looking for is a blob in the middle to indicate nothing, something to the right to indicate Donald Trump getting very excited and something to the bottom right if in fact Donald Trump's diversion is successful and the media reduce their coverage. Here are things Donald Trump doesn't care about much. Well he cares about the economy but it doesn't actually affect his tweeting too much. 
Well he tweets a bit more but the media don't pick it up. All leaves him unfazed. Gardening well if anything it puts him to sleep because he tweets less. There's some points out here on the left. Gardening is not his big thing. Skiing is also not his big thing. Nothing happens. So with neutral terms you get what you'd expect the blob in the middle. And just to illustrate the word clouds down here are the words from those articles in the New York Times on those topics and I put it there so you can confirm that we, you know, we picked the right thing. The skiing stuff really is about skiing. You know Olympics, mountain snow. I mean it's got all the right things in there. What about Russia and Mueller? Well here we go. This is the New York Times reporting on Russia and Mueller. And guess what? Here is this point cloud in the southeast which are word pairs. Donald Trump is tweeting about significantly more in response to Russia and Mueller coverage and the New York Times the next day reduces its coverage significantly in response to those tweets. And here it is for ABC. So the same thing for both ABC and the New York Times we get this reduction in coverage the next day. So to summarize as a caricature what we find is that whenever the New York Times talks about Russia and Mueller using these words over here Donald Trump starts tweeting about that. These are the words from the tweets that are in that southeast corner on the preceding plot. So he talks about Korea, China, job, jobs, tax, you know, etc., Republicans and so on. And in response to that the next day New York Times talks less about Russia and Mueller. Now the effect, second effect is actually quite small so I'm exaggerating what's going on here but that's simply a caricature visualization of the effects that we've observed. So what does that tell us? Well it tells us that Donald Trump while he had access to Twitter was able to set the political agenda and to influence New York Times and ABC media coverage. It appeared that way anyway. Of course we're not claiming causality in any of this. Gotta be super careful here. I'm not saying that, you know, Donald Trump is telling the New York Times what to do and they obey. No, it's just there's a reliable statistical association that is compatible with the idea that Donald Trump is indeed setting the political agenda. So that's one effect of social media that I think is reasonably strong and something worth keeping in mind for future discussion. Now the second thing I want to take up is micro-targeting. Now micro-targeting is the idea that you can address persuasive messages to people online based on certain characteristics. Now one of those characteristics is personality. Even here data from a paper by UU at all in 2015 that showed that if you have access to 300 Facebook likes by a person, you can predict their personality better than their own spouse. By the time you get to 300 you outperform the spouse. If you have 200, actually no, unless this one here, well 100 you do better than other members of the family. And with 10 likes you're already doing better than work colleagues. So Facebook likes, if you have access to that information allows you to know a person's personality and once you know a person's personality you could then perhaps manipulate them better because you understand what their vulnerabilities are. Does this work? What do we know about the effectiveness of micro-targeting? 
Well I would argue that there's pretty good evidence to suggest that it does work, based on this study by Matz et al. which included more than 3 million participants. So I don't think power, statistical power, is much of an issue in this study. And what they did was to expose their participants on Facebook to cosmetic ads. And these were real ads and they actually sold real stuff on Facebook with those ads. And the ads were designed to appeal to extroverts or introverts. And they had a complicated way of validating that. I'll show you some examples in a moment so you can get an idea of how this was done. And the audience was also selected to be extroverted or introverted. How? Well, by using their likes. We know what likes an introvert has on average, and an extrovert. So all we've got to do is target an ad to people with that profile of likes, whatever it is. What do the data show? Well, first the stimuli. Which one is what? Well, now that I've primed you, you probably also think that the one on the left appeals to extroverts and that one to introverts. And indeed, if you do the validation study, then that is what you find. And you sell more stuff if you match the audience to the ad. What I'm showing you here are the conversion rates. This is the click-through rate from each ad. You can express the same data also in pounds, in pounds sterling, which is the amount they actually sold. You get the same result. And if you match the ad to the audience, you sell more. So introverted ads are over here. Introverted audience is green. Well, you sell more than if you have an extroverted audience. And the reverse is true for the extroverted ads. If you send it to an extroverted audience, you sell more than if you send it to an introverted audience. So targeting works. There's other evidence from other studies done in the laboratory that pretty much supports that: persuasive messages can be tailored to a person's personality. And of course, online, if you do that, you have, you know, a lot of impact. To the point where we have to examine the relationship between Facebook and democracy. Now I just talked about cosmetic ads, and I'm pretty confident about what happens with cosmetic ads. What I don't know is what happens with political messages, because to my knowledge no one has done the research and we know way too little about what is actually going on on Facebook. But even if we don't know exactly what's going on, I think we can say that micro-targeted political messages are a serious problem for democracy, because if only the target and the originator know of the existence of the message, then a political opponent has no opportunity for rebuttal. Hillary Clinton had no idea what was going on on Facebook and what was being said about her by Russian trolls and God knows who else. Well, that is not democracy, because democracy relies on a public exchange of ideas, a free marketplace of ideas, so that people can make a choice. You can't do that with micro-targeted messages. So irrespective of any empirical data on that, I think we have a problem for democracy. For those of you who are interested in this sort of fundamental relationship between technology and democracy, the report that I was a lead author on for the European Commission about a year ago is available at that web link, and it goes into all these issues in great depth, and it also contains policy recommendations, incidentally, about how we might get out of this mess. But that all takes time.
The European Union is currently working on a lot of legislative proposals. I know that because I spent two months in Brussels earlier this year working on those initiatives but they will take years to come out. So what do we do in the meantime? Well one thing we can do that colleagues at the MPI in Berlin including Ralph Hirtwig and I worked on, one of the things we can do is to reverse engineer micro targeting by telling people something about themselves that sensitizes them to being targeted. That's the basic idea and I'm pretty sure Ralph is going to talk about that more at length in his keynote tomorrow. So I'm just going to give you a very brief thumbnail sketch of this one study that just came out a few weeks ago where what we did was to get people into the lab so to speak online of course and we gave them a personality test. We ascertained the extroversion and the introversion. Down one condition we did that first and we gave them feedback on their extroversion, introversion. In another condition we administered a relevant personality test and we then asked people to classify ads as being targeted at them or not. So in other words the only task was for people to say, hmm, yeah I think that ad is being, that's me, they're targeting me or no they're not. After we give them feedback on their personality in one condition but not the control. And how do we give them feedback? Well, we gave them a brief description of what an extrovert is and what an introvert is. Now it was a lot longer than just this of course but that gives you a sense of what we were telling them and they had to read this. We made sure they actually spent time reading this explanation. We then provided correct feedback on their introversion, extroversion score. So here's a sort of an extroverted person. They would get a feedback that places them on the 74th percentile and blah blah blah blah was all explained. This is where you are. A highly introverted person, this would probably be a mathematician, they would be given this feedback here saying 98% of people your age are less introverted than you. So that's the feedback. What happened after that? Well, this is the task. Is this ad targeted at you yes or no? That's the only question we asked. Picture the sound by yes, no, target it. Now if you're an extrovert, what are you going to say? Well, yes. If you're an introvert, probably not. Well that's exactly what happened. After people were given personality feedback, that's the experimental condition. The modal accuracy was 100% for people in that condition. And the average was around 90%. Now that is 30 percentage points higher than in the control condition. In the control condition, people are above chance. Chance would be 50%. Here the average is 60 something and it's above chance statistically, but it's 30 percentage points below the boosting that you get in the experimental condition. So it's a massive effect. And we've replicated this and extended it a few times. But the bottom line is that messages are targeted. There's no question about that. And the targeting has been shown to be effective for commercial ads. I don't know of anything for political messages off the top of my head. We also know that people can learn to detect advertisements that are based on their personality, at least for cosmetics. You know, well, that's a start and maybe that is a first step towards resilience. 
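A small illustrative comparison of detection accuracy with and without the personality feedback, using made-up counts chosen only to roughly match the reported 90% versus 60% figures; the test used here is an assumption, not the paper's actual analysis.

```python
# Hypothetical two-proportion comparison: boosted (feedback) vs control condition.
from statsmodels.stats.proportion import proportions_ztest

correct = [180, 124]   # correct classifications: boosted, control (made-up counts)
n_obs = [200, 200]     # participants per condition (made-up)
stat, pval = proportions_ztest(correct, n_obs)
print(f"accuracy boosted: {correct[0] / n_obs[0]:.2f}, "
      f"control: {correct[1] / n_obs[1]:.2f}, z={stat:.2f}, p={pval:.3g}")
```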
What we're doing now is the obvious next thing, which is to say, well, will people be less persuaded to buy cosmetics once they find out that they're being targeted? Now with cosmetic ads, it's not an ethically complicated issue really. It doesn't mean to my mind, it doesn't matter whether or not people are persuaded to buy one lipstick or another based on micro targeting. Where it is crucial, of course, is for political messages. Because that is where things become ethically far more dubious. Now it's an interesting aside, when you ask people, when you ask the public about micro targeting, it turns out that in Germany, the UK, and the US, people uniformly do not like being targeted on the basis of their personality or other inferred characteristics when it comes to politics. We just published that paper a few weeks ago in Humanities and Social Sciences Communication and that was clear across all different countries. So the public actually doesn't like being targeted for political messages. Now the final question I want to address briefly in all of this is, well, how can we be sure, or how can we make a causal attribution to any of this? Now in an experiment such as the one I just told you about where we had a control condition and an experimental condition, we can make a causal inference because we are randomly assigning participants to one condition or the other. So we have a control condition, experimental condition, and outcome. If there is a difference, we know it is due to our intervention. Now that's sort of causation 101 and it's an oversimplification and it's not the whole story, but as a first approximation, that is correct. And the key thing here is that the randomization disrupts all other variables that might otherwise be responsible for the outcome. Now in observational studies, we don't do that. Well, we can't do it because what we do in observational studies is that we look at naturally occurring variations. So we look at people who have different shoe sizes or different genders or different political opinions and we just sort of look at them and measure that and we relate it to an outcome such as maybe polarization. Well that's okay, but that does not allow us to make inferences of a causal nature because well, lots of reasons. One of them is that the causation could be the other way around that what we think is an outcome is actually causing some of the variation we're observing or perhaps more likely there are these nasty hidden variables out there that might instead cause the outcome and there's absolutely nothing we can do about that. Yeah, or is there? Well I would argue that actually there is because there are some ways in which you can come closer to a causal interpretation and the one I just want to illustrate is what's called an instrumental variables approach. Now what does that mean? Well it's an observational setting so we have naturally occurring variation and we have an outcome that we're observing. And we'd like to know if this causes that without running an experiment. How do we do that? Well we can't run the experiment but maybe what we can do is we can find a so-called instrumental variable that induces exogenous variation that makes this thing vary and in so doing perhaps that's almost like running an experiment. 
Instead of assigning people to different interventions maybe there's another variable that causes these interventions or makes these interventions happen on our behalf and if this exogenous variation cannot reasonably be related to the outcome directly well then the only causal path is the one that we're trying to establish. Now that's sort of true. I'm giving you a thumbnail sketch here so you know you can always make it more complicated but I think it's a first approximation that's pretty reasonable. Now here's one example of a study that was published last year that did this by Siminov et al. And what they looked at was the effect of Fox News consumption in the United States on compliance with social distancing. So their presumed causal variable was how much people watched Fox and their measure was how much people stayed at home compared to baseline pre-pandemic. And the hypothesis was that the more people watch Fox News the less likely they are to stay home because Fox News tells them that COVID is a hoax effectively. That's the sort of simplified model. Now if you look at the data then without doing anything else that's precisely what you find. You know the more people watch Fox the less likely they are to comply. Well is that causal or is it because the people who don't like complying with anything are choosing to watch Fox? Well we don't know that except what we can do and this is really clever. I love this. What you can do instead is to say ah ha in each of thousands of cable markets in the United States the position of the channels on the menu for your cable TV is quasi-random. So in some places in West Virginia Fox might be number one. You go to Maryland in some counties somewhere some small market it might be number 23. And whether it's one or 23 or 15 or 30 or anything in between is sort of random. I mean it's not entirely, I don't think anybody is drawing random numbers to do the channel assignment but you know there is no systematic relationship of a station to its position on the menu or button on the remote. And this is crucial because the channel position then cannot conceivably directly influence social distancing. I mean how would the menu for your cable TV make you stay at home or not on its own? No it doesn't but what it does do and there is independent evidence for that is that the higher up the channel is the more people watch that station. So on top of people's preference for Fox whatever it may be there is that added boost that Fox gets if they're high up in the menu as opposed to being lower. And that is that little bit of exogenous variation that is added to the naturally occurring variation in people's preferences for Fox. And the moment you do that you can look at those marginal effects that are due to the exogenous variation and you can then look at social distancing. For that added extra bit of exogenous, exogenously caused variation. And when you find that, you find that Fox is causing people not to stay home. The more people watch Fox the less likely they are to comply with social distancing. And this isn't the only study, there's another one that came out at the same time that showed the same thing focusing on two different commentators within Fox who either did or did not consider the pandemic to be a hoax and you find the same effect. So that's my way of background to give you a flavor of how one can do this. Now we have an effort forthcoming that will be submitted in hopefully not too long once I stop going to conferences. 
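A minimal sketch of the two-stage least squares logic behind the channel-position design just described, on simulated data; the variable names, effect sizes and the simulation itself are assumptions, and a real analysis would use a dedicated IV estimator so that the standard errors are computed correctly.

```python
# Manual two-stage least squares on simulated data, illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
channel_pos = rng.integers(1, 41, size=n).astype(float)   # quasi-random menu position
distrust = rng.normal(0, 1, n)                             # unobserved confounder
# Lower (more prominent) menu position -> more Fox viewing, on top of preferences.
fox_viewing = 5 - 0.05 * channel_pos + 0.8 * distrust + rng.normal(0, 1, n)
# "True" causal effect of viewing on staying home is -0.8 in this simulation.
stay_home = 10 - 0.8 * fox_viewing - 1.0 * distrust + rng.normal(0, 1, n)
df = pd.DataFrame({"channel_pos": channel_pos,
                   "fox_viewing": fox_viewing,
                   "stay_home": stay_home})

# Naive OLS is biased because the confounder drives both viewing and the outcome.
naive = smf.ols("stay_home ~ fox_viewing", data=df).fit()

# Stage 1: predict viewing from the instrument (channel position) alone.
stage1 = smf.ols("fox_viewing ~ channel_pos", data=df).fit()
df["fox_hat"] = stage1.fittedvalues

# Stage 2: regress the outcome on the predicted, exogenously induced viewing.
stage2 = smf.ols("stay_home ~ fox_hat", data=df).fit()

print(naive.params["fox_viewing"], stage2.params["fox_hat"])
```

The point of the simulation is that the naive regression is biased by the unobserved confounder, while the instrumented estimate recovers something close to the true effect, because the menu position only affects the outcome through viewing.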
And where we did a systematic review of the literature on the effects of social media on outcomes relevant to democracy. Now there's nearly 400 papers on this but only 25 of them use that approach that allowed us to identify causality and we focused mainly on those. Now the conclusion that we draw from the systematic review is that it's a bit nuanced because it differs between where you are in the world but in established Western democracies, so Europe, the United States, social media use causes increased political polarization and also some other adverse effects. And that's true even when users are exposed to opposing views online. So it's not an echo chamber phenomenon. It is simply the use of social media translates causally into increased political polarization. Now this conclusion comes with a caveat which is that the number of existing studies is small and that causal inferences even with instrumental variables and so on are not 100%. So there's an escape hatch built into this but I personally find it at least suggestive. And just to illustrate what other things might happen, there's a recent paper that was done here in Germany where they looked at random outages of Facebook in selected towns in Germany. Now it turns out that in Germany, I guess it's elsewhere, occasionally your internet fails. You may have noticed that. The ISP just falls over for a couple hours or for a day. Facebook may be down for maintenance or something. So now this happens presumably at random. I mean I don't think there is a mastermind, a Facebook outage master who's selectively switching off tubing and every Tuesday afternoon. You know that's not what's happening. It's random, pretty much. Well it turns out that whenever Facebook is down, there are fewer hate crimes against refugees in Germany. It's as simple as that. You turn it off and fewer people get beaten up. That's one of the effects of social media. I'm simplifying a little but not much. That actually is what they showed. So I think my 45 minutes are up. Now my conclusions, what are the claims I'm making? Well I think the first claim is that democracy is in retreat in certain key countries. Polarization is likewise increasing in certain key countries. And in part, that is due to social media through processes such as permitting agenda setting, targeted advertising and indeed just usage of social media for which some causal effects have been established. And we can now discuss the nuances or your disagreements with my conclusions. Thank you for your attention.
Democracy is in retreat or under pressure worldwide. Even in countries with strong democracies, polarization is increasing, and the public sphere is awash in misinformation and conspiracy theories. Many commentators have blamed social media and the lack of platform governance for these unfortunate trends. I review the evidence for these claims and show how the traditional view that mainstream media are instrumental in setting the political agenda has become superseded by politicians’ power to set their own agenda through social media. I review the growing evidence that all forms of media are a causal factor in shaping a variety of political behaviors, from ethnic hate crimes to compliance with social-distancing measures. I examine the implications of this analysis through the tripartite lens of political advertising, free speech, and government regulations.
10.5446/54796 (DOI)
I will present very quickly some of the previous results that we had working with online data and applying a data-driven approach to study social dynamics with online social media. Then, in the second part of this presentation, I will focus on some ongoing projects and some recent results on polarization and also on the link between polarization and speech. I would like to start from the digital information space. I work mainly with online data and with data from social media platforms. With the advent of the internet and later with social media, we have lots of information that is being produced by users, so by us, when we use all our devices in our digital lives, and this information is a lot. We produce a lot of information that, of course, can be precious if we want to run some kind of analysis, but on the other side, the fact that we produce this information and that this information is out there is also something that we, as users, have to process. We have lots of information that is going around on the web and that can spread very easily and very quickly. This was a great advantage and revolutionized the way in which we communicate with each other. The fact that a piece of information can spread is the first key point of this space: the fact that information can spread very easily and very quickly, and the fact that we have lots of information to process. Also, this space, the digital information space, changed if we compare it to the traditional media system. The main difference is that while in the traditional media system the users, let's say, on the other side are passive, so they do not have a direct, let's say, interaction with the information source, this is not happening on the internet and, of course, it's not happening on social media, where the users play an active role, not only in accessing content and selecting the information, but also in producing their own information. This is another key point, because we also observed the emergence of a more heterogeneous system of news sources. For sure, this can also be an advantage. It can, for example, foster pluralism from a certain perspective. But what we also observe is that things are a bit more complicated. Just to quickly describe what we do: we collect and analyze data from online media platforms. We use methods from computational social science, so methods from mathematics, statistics, computer science, to analyze this data. But usually what we do is to answer questions that are, sometimes, questions from the social sciences. It's a different approach, I would say a complementary approach. It cannot totally substitute the traditional social sciences approach, but we can sometimes have a broader overview of what's happening. Analyzing this data, what we have seen is that the online digital system is characterized by strong polarization. We did lots of studies on different topics, from climate change to vaccines, to science and pseudoscience, and in all cases where a controversial issue is debated, the users polarize. You have users that just get informed on a certain side, from a certain category of news sources, and users on the other side of the spectrum. Although we can say that we could have access to lots of information and information sources, what we observe is that, in the end, the users tend to confine their attention to well segregated groups of like-minded individuals.
What I am referring to are the now very famous echo chambers. We can observe the emergence of these; we call them clusters in computer science. They are groups of individuals where you have lots of internal connections. It's a very dense region in the network, but the connections with the other groups are not so many. The fact that you choose to interact with a certain kind of information means that you are confining your attention to that narrative, to that particular position, for example, on the topic, and you are not interacting, and so not having, for example, a debate with the other side, with the other position. This can be, let's say, a problem in the sense that it can reduce pluralism, for example, because what we also observed is not only that the debate is very polarized and that these groups of like-minded individuals emerge, but also selective exposure. Even at a more general level, if we analyze all the information, for example, we had this study on more than 360 million users on English sources on a global scale on Facebook. Also in that case, we found that these communities, these echo chambers, were emerging. What we also observed was that the number of news sources we interact with decreases with the increase of our activity on the social media platform, or with the increase of the time that we spend, let's say, on the platform. So this means that, although potentially we have access to this great amount of information, what's happening in the end is that we are really confining our attention, and so we are selecting information from a very small set of news sources. These dynamics are, of course, valid for information in general, but if we just think of misinformation as a particular kind of information with specific characteristics, so for example that it's not reliable, you can understand that the mechanisms that are behind it are the same. So in this direction, it's very important to understand how these dynamics can also influence how misinformation, for example, spreads, and what are the factors that are behind this spreading. We also studied, for example, the emotional dynamics of the users involved in these communities, and what we have seen is that, for example, the negative emotional attitude of users increases with their polarization, but especially that the debate degenerates when users from one community interact and discuss with users from the opposing community. So this is also showing us that we have this very polarized environment, but also that users from different echo chambers are not only very rarely interacting; when they interact, most of the time this interaction is not a civil discussion, a civil debate, but a debate that degenerates over time. So this, again, can reduce the possibility of dialogue and debate, for example, in a democracy. As for the implications of this kind of system, what we also did was to study how, for example, the debunking content that was created on Facebook to debunk false claims and hoaxes was received by, for example, pseudoscientific communities. And what we observed was that this content, because of the echo chamber structure, was not really reaching those communities.
So we observed that the main consumers of this kind of content are users from the scientific echo chambers, so users that we would not expect, you know, to need this kind of correction. And so, and this is the first point, the fact of having this really polarized environment and this structure in echo chambers can also sometimes limit the countermeasures that we can implement. On the other side, what we also observed was that in the cases where debunking was able to reach the target, let's say, in those cases we observed what we can define as a backfire effect, because these users from the pseudoscientific community reinforced their attachment and their involvement in the pseudoscientific community after being in contact with the debunking correction. So, from our data, what we were observing was that this kind of approach on social media was a bit ineffective. And what I would like to say is that, if you think about what I was saying before, when we have a system like this where information gets spread very easily and very quickly, the idea of debunking every single piece of content that is not reliable is simply not feasible from a human point of view. And so what we tried to do was to understand what kind of approaches, what we can do, to try to counter misinformation from a different point of view. So not acting when misinformation is already out there, but trying to do something before. And what I would like to present today: I will first talk about a work that we did on early warning, I will explain in a minute what I mean, and then I'll present some of the most recent projects I've been working on, giving you an idea of how we can work with this kind of data. So what we observed looking at the data was that when we have a certain topic appearing in official news, we have a sort of 24-hour window in which this topic can appear in fake news. So the idea is that we don't have much time, because it's only a 24-hour window, but we have some time, maybe, to make an intervention. And the idea of the intervention was: if I'm able, let's say, to predict what kind of topics, what kind of entities are likely to become the subject of fake news in the short term, what I can do is try to adjust the communication on these topics. Because what we observed in all our studies was that we have this very strong link between misinformation and polarization. Lots of the time misinformation, and especially disinformation, so when it's created ad hoc, is a kind of information that leverages the polarization dynamics. So what we did in this study was to understand whether looking not at the content, so not trying to, you know, define if the content is fake news or not, but at how users are interacting with content, gives us some hints to understand what's going to happen in the short term, so in the future. So what we did was to develop a framework for raising an early warning of potential misinformation targets. The idea of this framework was to be able to predict what topics are likely to become the subject of misinformation in the short term.
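A minimal sketch of an interaction-based early-warning classifier of the kind described above; the feature names, the synthetic data and the choice of a random forest are assumptions for illustration, not the framework's actual implementation.

```python
# Illustrative classifier trained only on features describing *how* users
# interact with a topic, never on the content itself.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n = 1000
# Hypothetical per-topic interaction features:
X = np.column_stack([
    rng.uniform(-1, 1, n),   # mean sentiment of the comments
    rng.uniform(0, 1, n),    # polarization of the audience around the topic
    rng.uniform(0, 1, n),    # distance in perception between communities
    rng.uniform(0, 1, n),    # share of toxic interactions
])
# Hypothetical label: did the topic later appear in misinformation sources?
y = (0.5 * X[:, 1] + 0.4 * X[:, 2] + 0.3 * X[:, 3]
     + rng.normal(0, 0.2, n) > 0.6).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```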
So the idea: imagine that we have a system like this, for example, in press rooms. What the journalist on the other side knows is that maybe what he can do is adjust the communication strategy on that particular topic, so that he can try to reduce polarization around that topic and limit the spreading, or the potential spreading, of misinformation on that topic. So what we did in this work is pretty technical, in the sense that we developed a classifier to be able to raise this warning. But what I just want to highlight here is the fact that this classifier is only using information on how users are interacting with content, not the content itself. So looking at, for example, the sentiment of the users involved in the discussion, the distance between how the content is presented and how the content is perceived, or how the content is perceived in different communities. So all these kinds of features were useful to produce a classifier that was able to make this kind of distinction. So now imagine that we have a system like this. Now the question is what we can do with this kind of system, because, okay, I know that I have some topics, for example, that can become the object of fake news, okay. What's the communication strategy that I should adopt to try to reduce that polarization? Because that was the aim: the idea is that if I reduce the polarization, the discussion on the topic is less polarized, maybe more civil, and so what happens is that the topic is not something that can be of interest for misinformation content. So we did the first, let's say, experiment of this, working with the London School of Economics and Corriere della Sera. Corriere della Sera is the main newspaper that we have in Italy. The idea of this project was to try to understand how the way in which you communicate the news impacted the way in which the users shape the debate around that particular news. We analyzed in this project data from Facebook and from Twitter, and we monitored the reaction of readers to fact-based journalistic outputs. The topic of this project, which started in 2017, was the issue of migration. The migration issue was one of the hot topics in Italy's agenda. And it was a very interesting, very polarized debate in our country. So it was a good example to understand how we could try to shape the debate, to reduce the polarization around the debate. What was interesting about migration was also that we didn't really have an emergency in Italy about migration. The issue was topping the newspapers' agenda, but we were observing a strong decrease in new arrivals in Italy. But the topic was very present also on social media. And so it was also interesting to understand this kind of dynamics: people were really interested in a topic that was not an emergency. And, of course, it was also a topic that was greatly discussed at the political level. And what, for example, we observed, and this was very interesting, is what you are seeing here on this graph, what we can define as the echo chambers. If you look on the left side of this graph, on the circle, on the ring, what you have are all the news sources that we have in Italy. And they have a different color according to the community they belong to.
There is an edge, a connection, between two news sources if they have at least one user in common. Using this information we were able to extract three main communities: the light blue one is what we call the mainstream media, in orange we have the right, and in blue-gray the Five Star Movement, let's say the populist parties in Italy. What is very interesting is that in our work these echo chambers are usually pretty stable over time; they do not really change if you observe what happens over time. What we observed here, though, is that in June 2018 we had the formation of a coalition government between the right and the Five Star Movement, and almost instantaneously these two groups merged into one. So in the following months, from June to August, we had the mainstream community and the orange one now representing the merger of the right and the Five Star Movement. This also shows that these communities really emerge from the way users select information: once those news sources were together in the coalition government, the narrative about migration was pretty similar between the two parties, and this was reflected in the way the news was communicated and in the way users interacted with it. This was interesting for analyzing the dynamics, but what we really did was an experiment in which the journalists produced stories on the topic of migration and shared them on Facebook. The journalists were of course completely free to choose the content of the news based on what was happening, but we asked them to use different content types and techniques when communicating: for example infographics versus plain text, or an editorial versus a news report versus a human-interest story, that is, a story telling the story of the people involved in the news, why they came to Italy, what they had to deal with before arriving, and so on. We tracked all the data from the Facebook page in terms of engagement, and we also measured the sentiment of the users' comments. We then had a group of human annotators annotate more than 26,000 comments with three labels: a toxic label, marking comments that were aggressive or that would make a user abandon a conversation, which we used as a proxy for civil discourse; mistrust of Corriere, as a proxy for trust in the news source, for example when a comment contained a direct attack on the credibility of the source; and the position on migration, whether the user was in favor of or against the migration phenomenon. We used all this data to analyze which techniques or content types worked better than others at smoothing polarization and facilitating a civil debate among the readers. What we observed, for example, was that all the multimedia pieces, and especially those containing videos, received very strong and supportive engagement, with a higher number of likes and shares, and were very rarely criticized.
We think this may also depend on the fact that this kind of content, pictures or videos, in some sense makes it more difficult for users to argue against the source. On the other side, for all the content types where the journalists pushed facts to the forefront, like infographics, we had a lot more debate and discussion and a lot of pushback, a sort of backfire effect that we can also see here, together with high criticism of the media. This we were expecting: an explicit, direct, fact-based correction on a very emotional topic like the position on migration could not really work in the end. As for the techniques, the news reports that were more straightforward, impartial and unemotional were the type of news that elicited the most trust, but the human-interest stories got very strong pushback. This was not something the journalists were expecting, because the idea of these stories, as I said before, was to take a more empathic approach, to let people understand what was really happening to these people and the reasons why they were coming to Italy. But it may be that these stories made users feel manipulated into taking a certain side, because of the more emotional framing by the journalists. In this case we observed a lot of toxic comments, a lot of criticism, and a lot of anti-migration comments. And in general, every time the journalist put forward a strong opinion or a policy proposal, as in constructive journalism, where they were proposing a solution to the problem, there was a lot of criticism of the source: Corriere was not trusted by the commenters, and there were many anti-migration comments. So in general, the data-driven approach, infographics and giving numbers and data to support the facts, did not really help to reduce the polarization of the debate. Of course, these are insights related to a specific topic and to our context in Italy, and there can also be an influence from the way the platform algorithms work, for example by pushing some kinds of content, like multimedia, over others. But it was interesting to see that the way in which you decide to communicate the same piece of news can really shape the debate on the other side, and so it can also be a way to think about countering misinformation and polarization not directly, but by changing the approach on the communication side. Still on the communication side, what I want to mention very quickly is that we also had a European project, a two-year project that finished last July, in which we analyzed how science is communicated in Europe. We did this for seven countries, for three platforms, Twitter, Facebook and YouTube, and for three controversial topics.
The results are interesting because they show differences in how science is communicated, not only across the three platforms but also across the different countries. So, again, when we analyze social media data it is very important to take into account that how people interact with information can change from country to country: users display different preferences in how they interact with content. If you are interested, the comprehensive report of this work is available; I have the link here on the slide and on the website of the project. In the project we also developed quality indicators for science communication, that is, the aspects you have to take into account to improve your communication and to ensure it is quality communication. For social media we also tested a series of recommendations over a period of five months, and then we published recommendations for communicating science on social media. This was a pretty interesting experiment, because the practices and recommendations we developed are data-driven: we used the data I mentioned before, together with quality indicators developed through co-design activities and qualitative research, and before being published the recommendations were also tested by communication practitioners on social media. Again, all this information and these products are available on the website. The other thing I want to mention is that we are also working on the infodemic, and especially on COVID-19. For example, we had a study where we analyzed the COVID-19 infodemic on five platforms, and we measured the infodemic in a way similar to how you measure an epidemic: we computed an R0 for each of these platforms, and we observed that these values were indeed supercritical for all the platforms at the beginning of the pandemic last year. Interestingly, we also observed that the way misinformation was spreading on these platforms was similar to how reliable information was spreading; we did not observe different spreading patterns. What we did observe was that the ratio of misinformation was higher on non-mainstream platforms, for example Gab or Reddit, where there is less moderation of content. The last project I want to mention, in the ten minutes I think I have left, is a project on hate speech. It is again a European project with several goals: on one side the detection of hate speech in different languages and the identification of triggers, and then the recommendation of counter-narratives and the proposal of policies to the European communication regulators. I will now show the first results, where we used a classifier for detecting hate speech, and we did this study trying to understand whether there is a link between misinformation and hate speech. Of course, the automatic detection of this kind of content is complicated, and so is countering it.
For example, starting from the definition itself: the platforms all have different policies and different definitions of what counts as hate speech, or of what speech is not allowed on the platform. In this work we used a definition of hate speech that refers to the whole spectrum of language that can be used in online debates: it ranges from normal and acceptable speech to inappropriate, offensive and violent speech, where violent speech covers all the forms that spread, incite or promote hate; but we also considered inappropriate or offensive speech, which is not illegal but can deteriorate the public debate and can foster radicalization. In this work we used data from YouTube, in Italian, and the topic was COVID-19; we also had channels and videos that we can consider reliable or questionable. What we did was to develop a classification model. How do you do that? You take a sample of your comments, and we hired a group of human annotators to annotate these comments with the different labels: appropriate, inappropriate, offensive, violent. Then you use this data to train the algorithm, so that it learns from the human annotations how to assign comments to the different labels, and of course the model is then evaluated. When this process is concluded you have a classifier that works with a certain accuracy, and you also compare how good your model is against how good the annotators were. In our case the model is pretty close to the agreement among the annotators in our annotation task, so we can say we have a very high-quality model. Once you have the model, you apply it to the whole dataset of comments, because of course it is not possible to manually annotate millions of comments; that is the point of using these classifiers. What we observe is that the proportions of the four speech labels over the whole dataset are similar for questionable and for reliable channels; we do not observe any particular differences. We also measured the comment delay, that is, the time in hours that elapses between the moment a video is posted and the moment a comment arrives. If we look at the different kinds of comments there is not a great difference, but on questionable channels toxic comments appear first and faster than appropriate ones, following the decreasing levels of toxicity. So on questionable channels, channels where we can find fake news and that kind of unreliable information, the toxic comments, the ones we can classify as offensive, violent or inappropriate, arrive first and faster than the others.
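The talk does not specify the model family; a minimal sketch of the pipeline described, annotating a sample, training on the annotations, checking accuracy, then labelling the full corpus, could look like this. The TF-IDF plus logistic-regression choice and the file names are illustrative assumptions, not the project's actual setup:

    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    # Sample of comments labelled by the human annotators
    # (labels: appropriate, inappropriate, offensive, violent).
    sample = pd.read_csv("annotated_comments.csv")        # columns: text, label
    train, test = train_test_split(sample, test_size=0.2, stratify=sample["label"])

    model = make_pipeline(TfidfVectorizer(min_df=2), LogisticRegression(max_iter=1000))
    model.fit(train["text"], train["label"])
    print(classification_report(test["label"], model.predict(test["text"])))

    # Once the accuracy is judged acceptable (e.g. close to inter-annotator
    # agreement), the model is applied to the millions of unlabelled comments.
    corpus = pd.read_csv("all_comments.csv")               # column: text
    corpus["predicted_label"] = model.predict(corpus["text"])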
What we also observed, and this is pretty interesting, is that we were not able to observe the presence of what we called pure haters, that is, users who only use violent language; this was not the case in our data. When you look at the data, what you see is that we have what we can call normal users who happen to use violent or hateful speech in some situations. Indeed, part of the future work is devoted to understanding what kinds of triggers can foster this behavior. We do have a very few users in our dataset who only post violent comments, but their activity is very low, fewer than five comments. The other thing we observed: we computed the polarization, or leaning, of these users, that is, whether they were watching and engaging with videos from reliable or from questionable channels, and here too we have two peaks corresponding to the extreme values of leaning. What we can say is that more polarized users tend to use higher proportions of inappropriate, violent or toxic comments. Interestingly, though, users skewed towards reliable channels post, on average, a higher proportion of inappropriate comments than users skewed towards questionable channels, and the data also shows that these users mainly use inappropriate comments in their opponents' community. So, if you remember what I was saying at the beginning about how these interaction dynamics take shape and how the debate degenerates, we are confirming with hate speech what we had observed with sentiment. We also observe that the toxicity level, the average of the toxicity values, correlates positively with the length of the discussion: when a discussion grows, in number of comments and in time, its toxicity increases, so we have a more toxic debate when the discussion gets longer. To summarize: we did not find evidence of a relationship between the use of toxic language and being part of a disinformation community, and we have no evidence of pure haters, but we observe again that the two phenomena, polarization and hate speech, are really interconnected, and that there is a positive correlation between toxicity and the length of the discussion. I am one minute late, so I will stop in just one minute, just to say that I am involved in other projects: for example, with AGCOM, the Italian Authority for Communications, I am in a data science group studying the phenomenon of disinformation, and I have also collaborated with the Presidency of the Council of Ministers here in Italy on this topic. And we now have a really nice and interesting collaborative project, called IRIS, with the Vaccine Confidence Project at the London School of Hygiene and Tropical Medicine, the University of Cambridge, Sapienza University of Rome, City, University of London and Harvard, where we are studying infodemics and trying to understand how we can promote a healthier information ecosystem. I will stop here and leave some time for questions. Thank you very much.
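A rough sketch of the two measurements just described, a per-user leaning score and the toxicity-versus-thread-length correlation, assuming a comments table with user, channel type and predicted label; the column names and file name are hypothetical:

    import numpy as np
    import pandas as pd

    comments = pd.read_csv("labelled_comments.csv")
    # Hypothetical columns: user_id, video_id, channel_type ("reliable"/"questionable"),
    # predicted_label ("appropriate"/"inappropriate"/"offensive"/"violent").

    # Leaning in [-1, 1]: -1 = engages only with reliable channels, +1 = only questionable.
    leaning = comments.groupby("user_id")["channel_type"].apply(
        lambda s: 2 * (s == "questionable").mean() - 1)
    print(leaning.describe())

    # Average toxicity versus discussion length, per video.
    comments["toxic"] = comments["predicted_label"] != "appropriate"
    per_video = comments.groupby("video_id")["toxic"].agg(["size", "mean"])
    r = np.corrcoef(per_video["size"], per_video["mean"])[0, 1]
    print(f"correlation between thread length and average toxicity: {r:.2f}")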
The COVID-19 pandemic has highlighted the challenge of conveying and communicating complexity and uncertainty to the public, also given the increasing central role of the Internet and social media. Designed to maximise users' presence on the platform and to deliver targeted advertising, social media transformed the information landscape and have rapidly become the main information sources for many of their users. Information spreads faster and farther online, in a flow-through system where users have immediate access to unlimited content. This may facilitate the proliferation of mis- and dis-information, generating chaos, and limiting access to correct information. In this talk, I will provide an overview of how online social dynamics and behavioural patterns can be investigated in a quantitative and interdisciplinary way. Moreover, I will discuss how data-driven insights can be used to design tailored policy recommendations.
10.5446/54495 (DOI)
Okay, then let's continue with the last presentation before the barbecue starts outside. My name is Thorsten Kukuk, I am the senior architect for SUSE Linux Enterprise Server at SUSE, and I want to tell you something about transactional updates: why we are doing this and how it works. First, two years ago there was a nice article in a German magazine which, roughly translated, says that before a basketball game started, the scoreboard didn't work. So they decided to reboot the Windows machine responsible for the scoreboard, and what happened is that during the reboot Windows started to apply all its updates. This took quite a long time, and the game started 17 minutes too late. They won the game, but at the green table they later lost it, and because they lost it they were even relegated to a lower division, all due to a Windows update. As an aside: I never want to read that my basketball team was relegated to a lower division because of Linux. Now, why am I talking about this? Distributions with rolling updates: if you look at the openSUSE Factory mailing list after a really big Tumbleweed update is released, you will always see similar problems. One question, for example, is: how do I apply intrusive updates to a system while I am still using it? Or: how do I handle big updates? If there is a new version of KDE or GNOME and you look at the mailing list after its release, there are always people having problems because they updated while GNOME or KDE were running, and afterwards, or during the process, it breaks, the update stops, the system is in an undefined state, whatever. That is not nice, and it is something we need to find a solution for. So what should I do if the update breaks my system? Of course, one solution is to apply the updates during boot, more or less in single-user mode, and then reboot a second time with all the changes activated. But as a desktop user at home, if I am in a hurry and want to print something, I really don't like it when my machine updates, reboots twice, and cannot be used during that time, especially if I am in a hurry. I hear this a lot from Windows users. But it is not only about desktop systems, there are also mission-critical systems, where an update must not interrupt the services. Think about running a cluster, as we heard earlier, where you update Docker and suddenly all containers are killed and restarted. If you do this in an HPC environment, where processes run for several days, weeks or months, your customer will be really angry with you. And at the same time, the update should always be either fully applied or not at all. You don't want an undefined state of the system that you have to debug and fix; it should just work. So the answer I found, already quite some time ago, was transactional updates. If you look at other Linux systems it is not new, and there is even a definition for it; I more or less copied it from Ubuntu, which has something similar with the snap format. A transactional update should be atomic: either it is fully applied or not at all, and the update should not influence your running system. Also, if something goes wrong, for example the new kernel does not detect your hard disk, video adapter or network card, you want an easy way to go back to the old state, which was working for you. So the first question I asked myself was: is there already something I could use for this?
There are existing solutions. One is to use several partitions: one partition holds the current system, a second one is where I install the updated system, then I reboot, and I can roll back. Some Linux solutions need up to 15 partitions for this, which is a huge number and a lot of disk space: if you assume a root file system with about 2 to 4 gigabytes of usage, you need 10 gigabytes of free space. You need big disks for small systems, so I didn't want to use that. A new package format is doable, but then I would need to build a whole new openSUSE and SUSE Linux Enterprise distribution. I wanted a solution that I can introduce in small steps into our existing distributions without the need to change everything, unlike something like OSTree, where you have a completely new kind of packaging. It is really nice, but it creates a lot of symlinks. And this brought me to the idea: why should I do all the work in user land that Btrfs can already do for me? Why should I bother with copy-on-write and emulate it with symlinks? Btrfs can do it, so why make it complicated? We have Btrfs as the root file system, we have snapshots with rollback enabled, we have snapper, we have zypper, we have the Btrfs utilities. That is all you need; you don't need any fancy new stuff. Whatever openSUSE Tumbleweed or SLES 12 ships already has everything you need for it. If you think about how snapshots work on SLES 12 or openSUSE Tumbleweed: you have your current root file system, and every time you make a modification, you create a new read-only snapshot of the current installation and then you update the running system. If something goes wrong, you can roll back to your clone. But it still influences your running processes. So why not do it the other way around? With transactional updates we also have a lot of old snapshots, but they are not there just for reference, never used; each of them was booted at least once, and the latest snapshot is your current root file system. If you now want to make an update, we create a new read-only snapshot, the same way as with traditional Tumbleweed or SLES snapshots. But now we don't touch the current root file system: we make the new snapshot read-writable and we run zypper in it. I use 'zypper up' here as an example; you can also run 'zypper patch', 'zypper dup', whatever you want. After that, we set the snapshot back to read-only, set it as the new default root file system, and we are done. That is all we need. The next time you boot, the new snapshot will be booted with all your updates. The disadvantage is that in the current root file system you don't see the modifications, so even for a small update, like an enhancement for Vim, you have to reboot to activate it. That is one of the drawbacks of transactional updates, and that is why everybody has to decide for himself which is the better solution. If you write a small script, that is all you need to do transactional updates on Tumbleweed or SLES. In the end it will be a little bit more complex: you need to check whether the individual steps worked, and you need to mount the /proc, /sys and /dev file systems, but in the end it is not much more than this. And there are, of course, some prerequisites for it.
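Before the prerequisites, here is a minimal sketch of the flow just described, wrapping the snapper, btrfs and zypper command-line tools from Python. It illustrates the idea only; it is not the actual transactional-update script, and it omits the error handling and the /proc, /sys and /dev mounts mentioned above:

    import subprocess

    def run(*cmd):
        # Run a command, fail loudly, and return its stripped stdout.
        return subprocess.run(cmd, check=True, capture_output=True,
                              text=True).stdout.strip()

    # 1. Create a new snapshot of the current root file system.
    snap_id = run("snapper", "create", "--print-number",
                  "--description", "transactional update")
    snap_path = f"/.snapshots/{snap_id}/snapshot"

    # 2. Make it writable and run the package manager inside it.
    run("btrfs", "property", "set", snap_path, "ro", "false")
    run("zypper", "--non-interactive", "--root", snap_path, "up")

    # 3. Seal the snapshot again and make it the default for the next boot.
    run("btrfs", "property", "set", snap_path, "ro", "true")
    run("btrfs", "subvolume", "set-default", snap_path)
    print(f"Snapshot {snap_id} becomes the root file system after the next reboot.")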
If you think about rollbacks, there is a problem: if you do a rollback and you have data outside the root file system, that data will not be rolled back, and you can get a mismatch. So you should strictly separate your data from your applications, otherwise you will lose data with this kind of update. /srv is a pure nightmare in this regard: if you depend on RPMs that currently install into /srv, I would advise against transactional updates until somebody finds a way to really separate the PHP and other code from the data, instead of putting everything in one directory as it is now. You also need to review the RPM pre- and post-install sections: they should not modify data which is not on the root file system, because most of the time it is not accessible at that point. Don't create something outside of your snapshot or subvolume, because it is not there. Also, don't try to fiddle around with running processes: if you think you need to restart, for example, the web server, the web server that gets restarted is still the old one, not the new one, so don't do that. And you should be able to cope with different data formats during updates and rollbacks. The good thing is that this is nearly no problem: the openSUSE Tumbleweed base installation is fine, and the SLES 12 base installation is fine, so really only a few RPMs do bad things there. Currently systemd and NTP have problems, but the next update should fix them; they make too many assumptions instead of checking whether things are really there. The next problem is data consistency. With openSUSE Tumbleweed and a read-write root file system, if I create the snapshot, update it, and then make modifications to the still-current root file system, those modifications will of course be reverted at the next reboot. Some people then complain that data is lost. It is not: it is still there in the old snapshot, you can diff the snapshots and reapply the changes you made, but of course that is not very admin- or user-friendly. That is why, as you heard in the openSUSE Kubic talk, we decided to go with a read-only root file system for openSUSE Kubic and CaaSP, to make sure this does not happen. With a read-only root file system I can create a snapshot whenever I want and reboot at any later time; nothing can be changed on the root file system, so nothing will be reverted. The problem is that with a read-only root file system we still have a lot of applications which cannot cope with it. Even the minimal Tumbleweed and SLES systems: the minimal Tumbleweed system should work by now, but the SLES 12 SP2 minimal system will not work on a read-only root file system. There are still too many packages which don't check what they are doing, don't follow the filesystem hierarchy standard and just write files all across the system. One of these applications is systemd; I still don't know how to teach systemd not to do this in all cases, but luckily systemd is error-tolerant enough that if it fails, it continues to work, and it is not important in this situation. For configuration changes, /etc needs to be writable, which opens up new questions. Assume you install Samba and you modify the configuration files. Now, with a transactional update, you update Samba to a newer version with an incompatible configuration file: what should you do?
Some solutions try it with a three-way diff: they try to work out what was changed relative to the original configuration and apply that to the new one. Sometimes it works; if it is a really incompatible change, you have a really broken config file afterwards. Others, and that is what I am currently doing, ignore it, so the admin has to make the changes manually after the next reboot. Here I really like the systemd way of doing it: in /usr/lib, which is read-only, you have the original configuration file as shipped by the distributor, and in /etc you only write your changes to it. So if the systemd default configuration changes with the next update, after the reboot you have the new systemd configuration plus your own changes on top, and systemd merges them for you. That is really something I like about how they did it, and something more applications should think about, but it is a long way to go. So where can you already use transactional updates? On openSUSE Kubic and the CaaS Platform it is the default and only way to update your system; it is a rolling release, and it works there with the read-only root file system. I have had machines running with it for half a year now, and apart from the systemd and NTP bugs in the post-install scripts, which could also hit Tumbleweed or SLES but are less likely there, there were no problems. I also have some openSUSE Tumbleweed installations doing transactional updates for every snapshot since half a year. It has been available in the official repositories for some months, so if you also want to use and test it there, try it. The only problem I ran into on Tumbleweed is when a package with an EULA is installed; then it breaks, because this step is always interactive, and transactional updates as currently implemented are non-interactive. That is something we still need to work on. So I am already at the end, and I will take questions now. No questions? Yes? How do we handle directories in /var? If you look at openSUSE Kubic or CaaSP, the directories that need to be written to have their own subvolumes and are excluded from the root file system, so they are not available during the update. And here we come back to: don't modify data, most of the time it is not even accessible. For some applications we have already changed them so that they do the data conversion with a one-time systemd service at the first boot after the update; for others we don't have a solution yet, and some are still on the list. For example, the CA certificates update package has problems, and I already discussed with Ludwig that we will convert it next so that it can cope with this. But this is something where we need to look at every package which writes into /var, whether its post-install sections are fine or not. Up to today, I know of NTP and the CA certificates package, which we have already adjusted or are adjusting; everything else I am using worked without problems. Yes? I can't understand you; it is too loud in the background, can you use the microphone? Do we have a way for packages to say "I need a subvolume for my package", so we can automate this? If you need your own subvolume for this, there is the mksubvolume script from snapper, which creates subvolumes during installation for you. systemd, for example, is using it. If you need a coding example, look at systemd.
And the better solution is clearly to get the subvolume into the distribution itself, but if that is not possible, you can also create a subvolume in the post- or pre-install sections. That works and is already done in Tumbleweed and on SLES. More questions? What about the ugly aaa_base package? That is also a bad mix of runtime and data. We try to get rid of this and have a package that only contains a file hierarchy, but no files. The question was about aaa_base. There are two parts: first, we have the filesystem package, which contains the file system hierarchy, so aaa_base itself does not really contain core directories. And yes, there are files which could be problematic; I am not sure. The only real problem we found with aaa_base is the passwd, group and shadow files. That is why we created the new system users concept that is currently being introduced, where we split aaa_base up into systemd sysusers files and create the users if needed, when the corresponding package is installed. There is still some work to be done, because the root user is still in aaa_base; that is the next step, because we didn't want to break everything, but we are working on it in smaller steps. And once the passwd file is removed, I currently don't see any problems with aaa_base any more; if something comes up, then we need to look at it again. What is your plan for getting this into SLE? Will this be available in SP3 already? Our plan is to ship it as an extra update for SLE. At first: the transactional-update RPM, which is in Factory, and I think it works in Leap too, is installable on SLE. But we will not support it for now, because we are only sure that the minimal system works; in my test environment the standard system works as well, but if you look, for example, at all the packages which install into /srv, they will not work. This is nothing we can support in general, and we need to do a lot more homework before we can go to a general-purpose operating system and say this is something we can support for every customer SLES system. So currently, with CaaSP or Kubic, we can say the packages contained there are fine, they are reviewed by us and we can support it, but we cannot say that for SLE; or better, we know that a lot of packages, not the core packages, but additional packages of SLE, are not fine, and so we would have a support problem there. Any other questions? Okay, if there are no questions, then I think you can go out to the barbecue. Thank you.
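As an aside on the /usr-defaults-plus-/etc-overrides scheme praised in the answers above (the way systemd layers its configuration), here is a small, generic sketch of the idea in Python. The file locations are hypothetical and this is not how any specific SUSE tool implements it:

    import configparser
    from pathlib import Path

    def load_config(name: str) -> configparser.ConfigParser:
        """Read vendor defaults from /usr, then layer the admin's changes from /etc."""
        cfg = configparser.ConfigParser()
        vendor = Path(f"/usr/lib/{name}/{name}.conf")   # shipped by the package, read-only
        admin = Path(f"/etc/{name}/{name}.conf")        # only the local changes live here
        cfg.read([p for p in (vendor, admin) if p.is_file()])  # later files override earlier
        return cfg

    # After a transactional update replaces the vendor file, the admin's overrides
    # in /etc survive untouched and are still merged on top of the new defaults.
    settings = load_config("exampled")
    print({s: dict(settings[s]) for s in settings.sections()})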
Applying small updates is normally no problem in a running system. But what about a new major release of your favorite desktop? Or a major version update of the Linux distribution you use? Today's approaches are, most of the time, either to apply the patches in the running system and risk that a running service or desktop breaks, or to apply them all by booting an installation medium and waiting for quite some time until you can access your machine again. Or your boot process is stopped for a long time while the updates are applied. Or some patches fail to apply and your system is left in an inconsistent state. A solution for this is transactional updates. Transactional updates are atomic: either they apply successfully, or, if an error occurs, you have the same state as before. And if an update does not work, there is an easy way to go back to the last working state. There are different solutions for this: some require new package formats, others require a second partition so that you can switch to the other partition during the next reboot. I want to present a third solution: using a standard package manager and leveraging btrfs. With snapshots and rollback on btrfs, everything you need is already available. This talk will give a short introduction to snapshots and rollback with btrfs and show how to combine and use these technologies to your advantage.
10.5446/54496 (DOI)
So, hello, and thanks for coming to this presentation. My name is Jan Krupa, and today I would like to tell you something about collecting data from Internet of Things networks using the Sigfox network. First, something about me: I have been working for SUSE for a couple of years, in the Open Build Service team, working mainly on Studio Online. But today's talk is not related to any of that; it is related to something I like to do in my free time, which is mostly tinkering with different gadgets, and recently I have been playing with gadgets that communicate over various protocols. Let me give you some background on how I started with this. A few years ago I wanted to do some temperature monitoring in my house, which means having a couple of temperature sensors in various places, and I was looking for a way to do that. There was one product which had just come to the market, a small chip which allowed me to do that for quite a low amount of money. It is called the ESP8266; it is a two-dollar gadget with Wi-Fi connectivity, it allows you to connect sensors to it, and then you can transfer the data over the local Wi-Fi connection in your house. It supports deep sleep, so it can live on a battery for quite some time. This was working quite well, but then I started to think about how to use this in areas where there is no Wi-Fi connection, which usually means outside the house or somewhere in nature. The obvious choice, or what people would usually start looking at, would be GSM, 3G and LTE networks, because the coverage is there in most places. But there are some issues with that. The first issue that comes to mind is power drain: transmitters for those networks usually require a lot of power, so you need quite a big battery or some power source. The other problem is that it is not that cheap to keep each sensor connected to the network separately, so what you usually end up doing is having a gateway connected to the internet, for example via 3G or LTE, and then a local Wi-Fi connection for the sensors, or some low-power link like the nRF modules for Arduino, or something like that. Recently there has been some development in this area, and new networks are coming to the market: mostly the Sigfox network, LoRa networks and LTE-based ones. The first two are already rolled out in most major markets around the world, which means the networks are running and you can use them. Today I would like to talk more about the Sigfox network, because it is very easy to use. The main difference between Sigfox and LoRa is that the Sigfox network is proprietary but has low hardware cost, while LoRa is open, which means you can even run your own network if you want. The LTE-based networks are not really deployed much yet. This is a table I found, a table from Nokia; I think it is about one year old, so some of the data could be outdated, but it summarizes the status and the differences between those networks in terms of bandwidth and whether they operate in licensed or unlicensed radio spectrum. So, something about the Sigfox network. This is the definition from Wikipedia. What is important is that it operates in an unlicensed band, which means you don't need to pay anything for using the radio band. There are two different frequencies, one for Europe and one for the US.
That is because of the different radio spectrum allocation in different countries. I mentioned Europe and the US; some countries use the European frequency and some the US one, even outside those territories. As for the coverage, this is a snapshot from the Sigfox website taken, I think, yesterday. As you can see, most of western and central Europe is already covered, and the countries that are not will probably be covered in the near future, I guess. The network itself is centralized and is operated by a French company called Sigfox. This company gives licenses to different operators in different countries: for example, in Germany it is also operated by the Sigfox company itself, but in the Czech Republic it is operated by SimpleCell, a local network provider. Now you are probably asking what you can transfer over the network, and you may be surprised that it is quite a small amount of data. The uplink message limit is 12 bytes, meaning that in one message sent from the device to the network you can transfer just 12 bytes, and you are limited to 140 messages per day, which is about one message every 10 minutes. The downlink messages, the messages transferred from the server to the device, are limited to four messages per day, and they are just 8 bytes. These messages also need to be initiated from the device itself, because the network does not need to know where the device actually is, so the device has to ask the network whether there is a data payload waiting for it. What is quite good about these networks is the pricing. If you compare it with using a GSM network, for GSM you would usually have to pay around 5 euros a month, depending on the country. With the Sigfox network you pay something between half a euro and one euro per month, and if you buy the hardware there is usually a one-year free subscription included. So if you want to play with it from a developer perspective, that is pretty cool, because you don't even have to register somewhere and pay a monthly fee; you can just tinker with it. What is also interesting is that some operators, I am not sure about Germany, but for example the Czech operator, will usually give you a free license to use the network if you tell them you are a developer and what you are working on. Another cool feature is that roaming is already included in the cost: if you are running the device in one country, you can use it anywhere else free of charge, or better said, it is included in the monthly fee. Now, the Sigfox hardware, the part that communicates with the network: there are a couple of different options, and new hardware has been coming to the market recently. I chose to show you what it looks like. The first one is something usually used with Arduino or boards with the same interface. The price is pretty low: the chip itself costs 4 euros, and if you want it with a breakout board you have to pay 24 euros, but of course if you build something on top of it you are fine with just the chip. The other piece of hardware is a bit pricier, but it has many more features, and this is what I will be showing in the live demo today. It is based on the ESP32 chipset, the successor of the ESP8266 I was talking about at the beginning. It has Wi-Fi and Bluetooth on board and it supports IPv6. Hardware-wise it is also pretty cool for that kind of device.
There is half a megabyte of RAM and four megabytes of flash memory, and you can also extend it with an SD card. As for connections, you can connect a lot of things, digital as well as analog. This is the schematic; you probably cannot read it, but it is on their website. Now for the practical stuff, how you can use the device. The first thing you have to do is register with the Sigfox network. When you receive any device for use on the Sigfox network, you need to find out two identifiers of the device: the first one is called the ID, the other one the PAC. They are basically two numbers which uniquely identify the device. For this SiPy device I am talking about, you do that this way from Python. After that you open the Sigfox backend website, where you choose which manufacturer you are using, then the country where you are using it, and then you enter those two numbers. For the registration you just have to enter your name and email address, I think; they don't require many details about you. After this process everything is set up and you can start using the device. The first way to access the device is to connect to it via USB through the breakout board; it works like a virtual serial port over USB, and you instantly have a MicroPython terminal over that. The other option is to connect via Wi-Fi: if you leave it at the default settings it acts as an access point, and there is a telnet and an FTP server running on it. The telnet server does the same thing as the serial port: you telnet in and you are immediately on the MicroPython command line. The FTP server can be used for various things, mostly for uploading a boot script, so that when you turn on the device it runs some Python commands; you can do that via FTP. What is also important after all of this is to tell the Sigfox portal what it has to do with your data. You can transfer the data from the device to the Sigfox network, but the portal still doesn't know what you want to do with it. You have to configure something called callbacks, and the Sigfox portal can do various things with them. The first and probably easiest option is to tell it to just send you an email: every time the portal receives some data it sends you an email, and you can configure the structure of the email on the callbacks page. Another option is that an HTTP request is sent to your website, which triggers some script on your server, and there are also connectors to Amazon Web Services and other clouds, so the options are quite broad. For this particular device there is also the possibility of direct connections between devices; that is not part of the Sigfox network, but it is supported by the Pycom SiPy device. Okay, so now I am at the part where I would like to try the live demo, and I hope it won't fail. I have the device here, so I will connect the webcam. Okay. So this is the device, I am connecting it to the USB port, and you can see that the light is flashing, so hopefully it will connect to the network. Now I will connect to the device via the serial port. Could I increase the font size, or is it okay? Is it readable? Bigger? I really tried, but it doesn't work for this terminal. Okay. So basically I have a code snippet which I will just copy and paste into the serial terminal, and then I will show you the email client with the message that was received there.
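The snippet pasted into the serial terminal is not reproduced in the transcript; on the Pycom SiPy, reading the ID and PAC and sending an uplink message from MicroPython typically looks roughly like this, based on Pycom's documented API for the European radio zone. Exact names may differ between firmware versions, and the packed payload is just an example layout:

    # MicroPython on the Pycom SiPy (a sketch, not the speaker's exact snippet)
    import socket
    import ubinascii
    import ustruct
    from network import Sigfox

    # Initialise the Sigfox radio for RCZ1 (Europe, 868 MHz).
    sigfox = Sigfox(mode=Sigfox.SIGFOX, rcz=Sigfox.RCZ1)
    print("ID: ", ubinascii.hexlify(sigfox.id()))    # the two numbers needed for
    print("PAC:", ubinascii.hexlify(sigfox.pac()))   # registration in the backend

    # Pack an example reading (temperature*100, humidity*100, battery mV)
    # into 6 of the 12 bytes an uplink frame may carry.
    payload = ustruct.pack(">hHH", 2137, 5420, 3300)

    s = socket.socket(socket.AF_SIGFOX, socket.SOCK_RAW)
    s.setblocking(True)
    s.setsockopt(socket.SOL_SIGFOX, socket.SO_RX, False)  # uplink only, no downlink
    s.send(payload)
    s.close()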
So it should be transmitting the data to the Sigfox network. What I am not sure about is whether there will be signal here; I actually tried it today at the hotel and not really here, but let's see. Okay, I have a slight problem with the network connection here, so what I will show you instead is the email from when I tried it yesterday. It is the same message that I sent, just with a different date, so this is how it will look. This is the callback configured for an email message, and you can do the same thing to deliver it via the HTTP API or in a different way. Let's go back to the presentation. So this is basically the code snippet, and this is how the email message will look. That is basically it. If you want to play with this, I wrote a blog entry with all of the commands you need to start using this device; you can find it on my blog at this address, and if you have questions I will be happy to answer them. Okay. Do you know whether the specifications give a maximum current you can draw, basically if you connect more sensors to the device and the sensors are powered through the device, is there a maximum current? I am not sure about the current. As for how you can power the device, it is possible to power it via either 3.3 or 5 volts, but as for the current the sensors can draw from the device, whether it is a complete pass-through without the device doing anything with it, or whether there are some restrictions, I am not completely sure. I didn't catch it, but you mentioned that the cost for access to the network is 0.5 to 1 euro per month; is that per device or per account? How is it billed exactly? I think the pricing is still not completely clear, because most of the providers give specific pricing depending on how many connected devices you have. In some countries I found that they don't even list any prices on the websites of the local providers; in other countries they list it somewhere and say it is roughly between 0.5 and 1 euro per connected device, per device connected to the Sigfox network. But since the network is just starting in a lot of countries, I think they are mostly probing the market to find out what price people are willing to pay, so a lot of providers don't really list specific pricing; it should be somewhere around that. I guess they are trying to target a segment that is cheaper than GSM and the like, and to push people to connect devices directly, without any gateway, so that it becomes affordable to put them everywhere. Okay, and one more thing: it is kind of hard for me to imagine how the infrastructure looks. Are they sharing the infrastructure with the mobile phone operators, or do they have something of their own? This is also different from country to country. I know that, for example, in the Czech Republic they partnered with T-Mobile, the local GSM provider, so they are using some of their base station sites to build the network. Of course they have to use completely new hardware, but they reuse the locations where the base stations are to create the network coverage.
The difference between this network and, for example, LTE or other mobile networks is that it uses quite a low frequency, so it is easier to achieve coverage with fewer base stations, and the data rate is also quite low, so they need far fewer base stations to cover a much bigger area, which probably also decreases the cost a bit. Okay, thank you very much. Is there any way to find out whether the device is connected to the network, I mean by checking on the website, or is there some API? You cannot really find out whether the device is connected right now, because as far as I know the device does not register with the network when you turn it on. What you can see on the Sigfox portal is the last location of the device, but it is not very precise, because it is computed by triangulation between the base stations, so it has a range of tens of kilometers. It is just a rough location, but at least you can find out which country it is in. You can also see the signal level at the base station that received the last message from the device, so you will see the signal level, the rough location, the time and date when this happened, and of course the data payload transferred from the device. But you cannot really do real-time tracking of the device, and I don't think this will be implemented, because the bandwidth of the network is quite low, and with a lot of devices connected at the same time you cannot transmit much data. That is actually the reason for the 140-messages-per-day limit: if a lot of these devices were connected at the same time without it, you would jam the network. Okay, thanks. What you can actually do is connect a GPS device to the whole thing and then transfer the GPS coordinates in the message you send to the Sigfox network. That is not a feature of the Sigfox network itself, it is just data you can transfer. I saw an application of this in a car alarm system, where they connected a Sigfox module with a GPS device to track the car if it is stolen. It is similar to what you can do with a GSM gateway, but this can be slightly cheaper and it drains less power, so it is one of the possible use cases. Okay, no other questions? Okay, thanks a lot for coming, and I hope you enjoyed it.
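On the receiving side, the HTTP-callback option mentioned earlier ends up at a small web endpoint that you write yourself. A minimal sketch with Flask; the parameter names device, data and time are placeholders you would configure yourself in the Sigfox backend callback template, not fixed names, and the 6-byte payload layout matches the illustrative packing from the earlier sketch:

    import struct
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/sigfox", methods=["GET", "POST"])
    def sigfox_callback():
        # Values substituted by the Sigfox backend according to the callback template.
        device = request.values.get("device")
        timestamp = request.values.get("time")
        payload = bytes.fromhex(request.values.get("data", ""))
        if len(payload) == 6:
            temp, hum, batt = struct.unpack(">hHH", payload)
            print(f"{timestamp}: {device} -> {temp/100:.2f} C, {hum/100:.1f} %, {batt} mV")
        else:
            print(f"{timestamp}: {device} sent {payload.hex()}")
        return "", 204

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)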
This talk will give you an introduction to the Sigfox network and show the differences between Sigfox and other IoT networks. It will also demonstrate examples of how you can process the collected data. A live demo will be shown on stage.
10.5446/54719 (DOI)
I will take this opportunity to also thank Ofer Gabber, who is here, for a list of very helpful criticisms and corrections on the preliminary version of my work. Let me start with the introduction. A is a commutative ring with unit, and B is a finite ring extension; finite means that it is finitely presented as a module. Is that why you speak of Noetherian rings? No, no, it will soon be Noetherian; I am just starting like this. So the definition of a finite extension is finitely generated? Yes, for me that will be it; if you want, you can take A Noetherian. But for this part, on this blackboard, that will be the definition. So we consider the exact sequence of finitely presented A-modules 0 to A to B, and the question of the day is when this sequence splits. First, let me say what the splitting of this sequence means. In terms of ideals: an ideal I of A is contracted when, if you take IB and intersect it with A, you get back just I; and you ask the same for all polynomial extensions of A, for every polynomial algebra A[T1, ..., Tn]. In other words, every ideal is universally contracted. Another formulation, which is equivalent and more categorically minded: the base-change functor from A-modules to B-modules, which is just tensoring with B over A, should be faithful. Some examples where this condition is satisfied. For instance, when B is flat over A: since A to B is an inclusion and B is flat and finite, it is faithfully flat; moreover B is flat over A and finitely presented, hence projective, so you have a splitting. So this is the trivial case of the story. Another very special case is when A is a normal domain and the degree, the degree being just the dimension of the vector space you get by tensoring B with the fraction field of A, is invertible in A; this also implies that the sequence splits, using the trace divided by the degree. Now, the direct summand conjecture, due to Hochster: does this sequence split when A is a regular ring? A word about this condition: it is a rather strong assumption, in fact. If you try to relax it, you find counterexamples in positive characteristic, for instance a non-regular example. In characteristic 2, I copied it from Hochster, so I think it is right: here is A, and here is B. The degree here is just 4, so the invertibility condition is not satisfied, and this exact sequence is in fact not split, even though B is a normal complete intersection. Excuse me, when you don't have the extra term with the cubes, I think the exponent is wrong. So when you have the cubes, you also have 2? Yes, yes. So this is an example showing that, already for surfaces in positive characteristic, you cannot do without this invertibility condition. At this point I don't really want to dwell on the motivation for this conjecture.
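For reference, the trace splitting invoked in the normal-domain case above can be written out explicitly; this is a standard fact, restated here for the reader, since the talk only states the conclusion:

    Let $A$ be a normal domain with fraction field $K$, and let $B$ be a finite ring
    extension of $A$ such that $d = \dim_K (B \otimes_A K)$ is invertible in $A$.
    For $b \in B$, the trace $\operatorname{Tr}_{B \otimes_A K / K}(b)$ is integral
    over $A$ and lies in $K$, hence in $A$ by normality. The $A$-linear map
    \[
      \sigma \colon B \longrightarrow A, \qquad
      \sigma(b) = \tfrac{1}{d}\,\operatorname{Tr}_{B \otimes_A K / K}(b),
    \]
    satisfies $\sigma(1) = \tfrac{1}{d}\cdot d = 1$, so $\sigma$ is a retraction of
    $A \hookrightarrow B$ and the sequence $0 \to A \to B$ splits.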
I will just say, in a few words, that it arose in connection with a collection of conjectures in commutative algebra called the homological conjectures, which have their source in the work of Peskine, Szpiro, Hochster and other people, on problems about local intersection numbers and so on. Another source is a descent problem, in work of Olivier, Raynaud, Gruson and others; in their paper the conjecture is not stated, but it is implicit. For example, this conjecture implies a positive answer to a question left open in the paper: that flatness of modules descends along injective integral extensions of Noetherian rings. So the conjecture is not stated like that in the paper, but it is explained that the conjecture implies what I just said. So that is another direction, another source. You are saying that whether flatness descends in this sense is a problem already in the finite case, is that the statement? Exactly; for integral extensions it is a non-trivial problem. And then there is another paper which shows that the conjecture implies what you said, yes? No, it is simply that there are papers saying that the conjecture implies it: one can pass from the finite case to the integral case and obtain that flatness descends; there is an argument in the paper which shows that. Okay, so now I will tell you some facts about this conjecture. There are reductions, which are mainly due to Hochster. For example, one can reduce to the case where A is a regular local domain and B is a domain. And now there are several cases. In characteristic 0 you can apply the second argument above, using the trace divided by the degree, so the conjecture is true. In characteristic p it would be more surprising; it is also true, but it is more complicated. Let me give the recipe, which is due to Hochster. We introduce the subring of A, written A^(p^n) between brackets, generated by the p^n-th powers, and the ideal generated by the p^n-th powers of the elements of the maximal ideal; it is just the expansion of the maximal ideal of this subring. You can do this for all n? Yes; and these ideals gradually decrease, each is contained in the previous one, so when you take the intersection you see that it is 0. I will just use this fact. Now, because B is finite over A, there is certainly an A-linear form on B which is not zero. So there is an element b of B with lambda(b) not 0; you can translate lambda by b, and you get another linear form with the property that lambda(1) is not 0. And since the intersection above is zero, there exists an index n such that lambda(1) does not lie in the ideal generated by the p^n-th powers of the maximal ideal. Okay, now you use the fact that A is regular, and also that it is a regular domain, which implies that A is finite free over A^(p^n). One should assume the residue field is perfect. Ah yes, yes, thank you; you are right, one may assume it is perfect; one can assume that, it is not a problem. So this means that A is finite and free over A^(p^n), for every n. And the property above implies, by Nakayama, that lambda(1) is part of a basis of A over A^(p^n).
Maintenant, vous pouvez, vous pouvez prendre une forme linéaire, mu A, p par Z, linéaire, A, p par Z, qui mapse cette lambda 1 à 1 dans A, p par Z. Maintenant, quand vous restrez du lambda pour B, lambda par Z, p par Z, pardon, ça vous donne une retraction de B, p par Z, à A, p par Z. Donc, c'est l'inclusion ici, c'est juste une partie de cette lambda 1. Donc, vous avez un problème, pas pour A et B, mais pour A, p par Z, p par Z. Mais par la structure de transport, par l'intérêts de Frobenius, vous avez Frobenius minus p par Z, qui vous donne une partie. Donc, c'est un prouvé. Donc, nous sommes en train de faire un cas difficile, ou une caractéristique de mixte, c'est le P, où en fait, on peut réduire une structure de coé par l'intérêts de Frobenius. Mais on peut même imaginer que c'est un renouvel. C'est partie de la longue pièce de B, en 80s. Exactement. C'est un peu... Oui. C'est un peu drôle, c'est un élément canonique. Exactement. Cette réduction est non trivial. Mais ce n'est pas très important pour le secours. Donc, c'était... Donc, c'était... En fait, cet état était ouvert. Donc, ceci était fait en 1973, en fait. Et ceci reste ouvert, d'exception que pour une dimension 3, c'est solvée par Heidmann. Donc, l'élément. Et dans dimension 2, c'est très facile. Oui, oui. Oui, très facile. Oui. Oui. Oui. Oui. Oui. Oui. Oui. Oui. Oui. Oui, essentiellement, parce que vous recruez de la flatness. Le B est normal aussi. Oui, oui, oui. C'est très facile. La 3 est sérieuse. Donc, donc, l'élément de cette décision est d'expliquer, prouver, proposer le début de l'August, je pense, de cet état. Donc, de l'épisode général, de la conjecture directe en caractéristique européenne, en utilisant des techniques parfaites. Donc, je vais commencer à expliquer la stratégie. Donc, peut-être, c'est encore une petite chose, qui sera utile. Donc, pour dire que la conjecture est vraie, ça signifie que, sous certaines circonstances, cette séquence explique. Et en fait, pour avoir cette explication dans le caractéristique, dans l'épisode général, c'est suffisamment d'expliquer. C'est aussi remarqué par l'Aux-Strahe, un genre de argument de Mitterglet, suffisamment de expliquer cette séquence, mode P to the N pour tout N. Ou peut-être M, parce qu'il y a un autre N. Donc, c'est suffisamment de le faire, modulé ou quelque chose. Ou même de la mode de force, de la maximale idéale, c'est le même argument. Oui, oui, je pense, oui. C'est vrai. En fait, c'est le premier à prouver ce que vous dites. Ok, je vais expliquer. Je pense que je vais avoir... Ah, ah, ok. Merci. Donc, je vais expliquer la stratégie. Donc, je vais commencer avec cette prouve en caractéristique P. Vous voyez que c'est quelque chose... premièrement, vous avez un analogue de Frobenius ici, dans une mixte caractéristique pour A. Donc, dans la séquence, je dois le faire ici, dans la séquence, je considère ce cas. Donc, vous avez un Frobenius ici, pas de problème, mais sur B, peut-être qu'il n'y a pas de Frobenius. Donc, vous ne pouvez pas imiter la prouve. Il n'y a pas de sens de quelque chose comme B pour P pour Z. Donc, cette prouve est hopeless de généraliser. Donc, nous allons essayer un autre argument dans la caractéristique P. Donc, deux caractéristiques P pour Y, très court Y, mais dans un cas séparable. Donc, c'est très spécial. Et aussi, pour la simplicité, quand A est... je vais vous donner une V par quelque chose de caractéristique P de la forme, on dirait K, le parfait fil de caractéristique P T0. 
Et puis, j'ai les autres variables, donc T1, donc Tn. Donc, c'est une question de caractéristique P. Et je suppose que c'est séparable. Et puis, on peut considérer qu'en fait, en appuyant des frais de frein, on peut appuyer un genre de inverse de frais de frein, donc, prendre un parfait closeur. Donc, 1 par P infinity. Et puis, ici, 1 par P infinity. Donc, ceci est séparable. Et, ce qui est l'advantage, ici, pour passer à la preuve de parfait closeur, c'est que la fraise a des propres plus meilleures. Donc, ici, on peut prendre la fraise, dans le sens avant, donc la fraise, dans le fond de frais de frein. Et puis, ce n'est pas surjectif, c'est pas surjectif, c'est pas surjectif. Et tout comme ça. Donc, vous ne pouvez pas vraiment espérer utiliser la fraise pour obtenir une rétraction de cette inclusion. Mais ici, à la preuve de frein, à l'infinité, il y a plus de espérance, parce que la fraise de B infinity est une idéale radicale. Donc, c'est un...... un......... et donc, en particulier, ce n'est pas... ce n'est pas...... une idéale générée par les TIs............................................................................ ndss ndss............ etbles de 1 parce que c'est 1. Et ensuite, il existe...... comme la projection sur le factor mi de l'esprit B en B1 par P de B est essentiellement 1 de la tracé donc vous pouvez utiliser cet élément vous embedez B1 par B1 ici vous avez la map B' de la tracé de B'B et ça va au A1 par P et ici j'ai la projection pour le mi par A et ça s'envole 1 à 1 donc c'est la retraction d'un embedement de A en B donc c'est... somehow l'idée de l'utiliser la tracé encore marche vous allez au parfait closé donc maintenant cet argument peut être un sujet ce que je vous suggère un anologue dans le caractéristique au moins dans un certain condition qui est un peu... un anologue de la probabilité je ne sais pas donc je vais revenir à cet exemple un rang de power series formel de la ring de la valeur discrète et ce cas-ci un cas spécial de caractéristique mix qui a été étudié par Bargain Barth peut-être qu'il a été publié 2 ans plus tard je ne suis pas sûr mais c'est un cas où quand vous invertez P vous avez un étal final étal étal extension donc en ce cas qu'est-ce qu'on peut... en lieu de cette p1 p∞ a∞ euh... on peut définir à∞ à∞ quelque chose que vous avez de la variabilité de la variabilité et peut-être p p∞ et p∞ qui réplace ce truc sera la clé normale ou la clé intégrale de la variabilité de la variabilité oui p∞ a∞ a∞ p∞ ce que vous avez quand vous invertez P alors... vous avez des polynormes ça ressemble à... oh, je suis désolé, c'est A c'est A donc c'est... oui, c'est la même formule de la paracère, parce que dans A c'est A oui oui ok alors... les propres qu'on met ce truc de la variabilité est une puissance d'alimentation qui dit que quand vous avez un truc comme ça, une extension finie je devais en parler vous avez aussi A∞ p∞ p∞ p∞ est encore finie en tout cas oui oui et puis il y a une extra property pourquoi je considère ce truc ce n'est pas un truc parfait mais c'est parfait pour le je tingue une dans la crise et pas souvent donc vous vous shoes je je parle pas son terme job récalculation que, quand on va au p, on peut laisser l'acte frobenus et l'acte frobenus est surjectif. Donc, Falking's almost purity says that in this situation, you can almost remove the 1 over p, so B infinity is, so P1 over P infinity, almost finite et tal over A infinity. In particular, it is almost facefully flat. 
So, now you have the following diagram, which is analog to this diagram, A infinity B infinity B. So, here, this is facefully flat, in fact free, facefully flat, and here it is almost facefully flat. So, this implies in particular that when you consider the extension, not for A but for A infinity and B tensor over A infinity, this splits almost. Now, one needs some device to remove the almost. And the device is a simple lemma. C'est-à-dire que si R est un ring local, donc le nautarien, le nautarien ring, et S est une extension de la surface, peut-être pas nautarienne, est un modulé finitiellement généré, qui est comme ça, M, un tensor avec S, est élevé par des idéaux importants. J'ai une idéal en S, et vous assumez que cette idéal intersectée avec R n'est pas 0, alors M est 0. Donc cette condition dit que M est presque 0 dans le sens de I, ou plutôt M, quand S est presque 0, vous pouvez deduer que M est 0. C'est très simple, mais c'est crucial, c'est le moyen où vous étiez ensemble le nautarien, avec le nautarien de tout ce qu'il est parfait. Vous appliquez ça, à R, c'est A, et M est la module M générée par l'étude extension, par rapport à notre sequence exaclée. C'est juste A, et ce que j'ai écrit là, c'est que A, vous multipliez A par une puissance fractionnelle de P, A tensor 1, 1 à A infinity, est 0. Et donc la conclusion par ce lemma est que E est 0, donc star spitz. Donc c'est un petit ré-writing de Barthes-Prouves, suitable pour ma extension pour le cas généreux. Donc maintenant, je voudrais garder un peu cette ligne de pensée, mais attaquer le cas généreux. Donc peut-être avant que je fasse ça, je vais faire un commentaire, bien sûr, on pourrait élabérer un peu cette preuve, et on peut faire des assumptions, par exemple, en prenant d'assumer que vous avez vraiment une extension en étal, après invertir P, vous pouvez demander pour une version logarithmique, et ça peut être fait, mais ce que vous ne pouvez pas faire, c'est d'essayer que par une résolution miraculeuse de singularité, vous obtenrez un cas généreux par ce genre d'argument. Parce que la propre spritz, ce qu'est le cas avec star spitz, c'est que vous ne pouvez pas descendre cette propre spritz par un bloc de fausse, par exemple. Donc c'est un cas où je pense que que l'esprit de utiliser le genre logarithmique entre ces contextes est vain. Donc, il faut attaquer le cas où vous avez des ramifications, et surtout, un analog de la théorème de la puissance de la puissance, dans un cas ramé. En fait, je l'utilise. C'est pas important. Je veux juste des choses concrètes. Vous pouvez prendre des pouvoirs de la théorème dans un régularité régulier. Vous avez des coordonnées canoniques. Je vais expliquer un autre cas. Je explique la stratégie, un cas généreux. Le cas généreux, il faut plus un bloc. La théorème de la théorème a l'infinité de la même. Mais il faut considérer quelque chose d'autre. Il y a plus d'un bloc. C'est obtenu par ajouter des routes de la discréditation. C'est très bien le spirit de l'Abiens-Caslemas. Je vais parler de l'Abiens-Caslemas. La théorème de la théorème a l'infinité de la même. La théorème de la théorème a l'infinité de la même. La théorème de la théorème a l'infinité de la même. C'est un cas généreux. Il faut bien comprendre que la théorème a l'infinité de la même. C'est très difficile de le décrire. C'est à l'église de la théorème de la même. Et la théorème de l'infinité de la même est presque ce que vous en pensez. C'est un cas généreux de l'infinité de la même. 
C'est un cas généreux de l'infinité de la même. C'est un cas généreux de l'infinité de la même. Je dis que c'est un cas généreux de l'infinité de la même. C'est un cas généreux de l'infinité de la même. C'est un cas généreux de l'infinité de la même. Je garde la même notation. C'est une p1 over p'infinité presque finie et étale. C'est une p1 over p'infinité presque finie et étale. C'est une p p'infinité pour nm. C'est une p1 over p'infinité pour nm. En fait, c'est une partie de la même, qui dit que c'est un ring parfait, un tuyau intégral. C'est une partie de la même, qui dit que c'est un tuyau intégral. En ce contexte, on parle d'une mathématique, qui n'est pas une cas généreuse. C'est une cas généreuse, qui n'est pas une cas généreuse. En ce contexte, il y a des techniques. C'est vraiment en espérance de l'Abiens-Caslema, pour avoir une sorte d'étal, en invitéant un discriminat, vous ramifiez le discriminat. C'est un pg. Je ne peux pas dire beaucoup de choses, c'est plutôt technique, mais on utilise l'idéo. Je considère le principal parfait de son espace, attaché à une p1 over p. Je considère le complément de la neighborhood tubulière de la devise G equals 0. C'est un domaine rationnel, plus grand que p2g. Alors, il se termine comme p2g, pour plusieurs gènes, 1, 2, etc. Le point est que ce rang est en fait un limites de localisation, p2g, ce qui est le rang des fonctions, qui sont boundées par une sur ce domaine. C'est une version parfaitée de l'extension de Riemann et vous avez la même forme B. Mais ici, vous avez un algebrae parfait de l'algebra, mais vous êtes dans une situation où vous aviez l'envergation. Vous pouvez appliquer des puissances, des variantes, comme vous et Schultz, pour ces extensions. Vous avez quelque chose d'alimentation finie, le problème est d'aller au limites, ce n'est pas facile. Je utilise une technique galvaniste, qui est très utile et simple. Pour montrer que quelque chose est presque finie, ce n'est pas facile. La seule façon de montrer que c'est galvaniste, c'est juste une équation, ce n'est pas une propriété. C'est un standard dans un algebrae computatif, si vous avez une extension finie et une groupe finie, comme la variante R, vous pouvez écrire une map, une map de S, une pièce de S, par l'élément de la groupe, comme l'imagine, comme une théorie galvaniste. Si la map est isomorphiste, si l'isomorphisme est finie, la table S est étale. Vous avez la finitétise pour 3. Vous avez juste à proposer quelque chose d'isomorphisme. Vous avez une finitétise, c'est presque finie. Maintenant, pour avoir une isomorphiste, vous devez prouver que certains idées sont dans cet algebrae, mais vous savez que les théorhymes sont dans tout cet algebrae, et ils sont tous compatibles, donc ils vont au limites. Par contre, ce genre de choses peut simplifier la preuve de la priorité de les fâchés. Le point de vue est juste de prouver que B, si vous avez un R, est parfait et si vous savez que S est parfait et la extension est finitétale, quand vous invertez P, vous pouvez avoir presque pour 3 la finitétalité. Vous avez réduit le cas de GALWA, parce que ici vous savez que... alors, nous allons le faire. Les fâchés sont presque pure, donc une petite proposition. Vous avez une finitétale extension de l'algebra de l'algebra, donc si vous avez un élément intégré, donc un 0 à un 0 est presque une finitétale. Donc il y a une réduction standard à la case de GALWA, en utilisant un torsor S, vous réduisiez le cas de GALWA, donc réduisiez le cas de GALWA. 
Maintenant vous avez appliqué ce truc, donc ce que vous savez est S-S-S-R, c'est un sémorphique S. Et ensuite vous avez obtenu l'élément intégré, mais il est connu, donc au moins, si vous avez obtenu l'élément intégré, c'est presque le même que le produit de la finitétale. Donc ce sont les propres propres d'algebra. Vous pouvez conclure, d'exemple pour cette complication, ici il n'y devrait pas être complication, donc le truc est de réduire le mod P pour dire que ce soit OK, ou mod PN pour aucun end, et quand vous avez quelque chose de finitétale, ou presque finitétale pour mod PN, pour tout N, vous pouvez aller au limit, vous avez un peu de grottinique équivalent remarquable, mais comme Offer Gabber pointait à moi, vous ne pouvez pas faire ça dans la situation générale, où vous vous dealz avec cette idée, parce que P n'est pas contente, cette idée. Donc ici vous avez aussi besoin de obtenir le premier fond de finitétale, vous avez besoin de prouver que vous vous êtes en finitétale, ce n'est pas parfait. Vous avez besoin de prouver que le S est parfait. Donc c'est une partie de ça. Oui, mais par exemple, dans Kedlaia's approach, ils prouvent le premier, et ensuite ils utilisent tous les restes de robes à roues pour obtenir l'autre. Je suis sûr que cette petite proposition, dépense l'auteure de toutes ces roues à roues. Scholtz a une autre façon de faire, il prouve les deux choses simultanément. Ok, donc je vais expliquer maintenant peut-être la deuxième partie. Donc, ça vous donne ici. Donc je vous imite, mais... Donc, à moins que je... C'est presque facile, flat, à moins que mod P pour M pour tout M. Je vous ai demandé un question, j'ai oublié un peu ce que vous avez, donc vous avez des complotations et des ordres, qui sont importantes dans ce... Donc quand vous constritez, vous ajoutez G1 par P pour la N, vous pouvez ajouter un algebrae, et puis vous pouvez... Et puis vous pouvez passer la limite par N, piédiquement complet, mais maintenant, ce n'est pas uniforme, est-ce que... Donc il y a un concept uniforme par un algebrae. Oui, vous devez prendre celui-ci. Donc c'est le boulot unique pour la norme spectrale. Ce n'est pas bon. Ok, maintenant, est-ce que... C'est-ce que la norme sur celle-ci est équivalente au respect de l'Allah, ou même pas? Dans ce simple cas, c'est vraiment une norme spectrale. C'est une norme spectrale que je considère. Quand vous avez juste la complication, avant, vous avez vu ce que vous avez fait, donc comment vous avez... La complication est sur quoi? Après 1 par P, où est la complication dans la formule? Parce que ça dépend... Bien, laissez-moi prendre comme ça. Oui, je pense que ça ne fait pas de différence en ce cas, mais en aucun cas je veux dire ça. Je veux dire, quelque chose de complet. Ok, si ce n'est pas fini, ça ressemble à OK. Mais non, la complication est pas compliquée, si vous prouvez que le Piatik n'est pas un element de spectra, non, c'est 0 dans le Banner Haribah. Oui, je pense que c'est ok, mais je ne veux pas ça. Vous pouvez oublier ce qu'est le problème, donc je considère vraiment un algebrae parfait. Et vous ne prouvez pas que le Piatik n'est pas un formule unif, je ne l'ai pas oublié, c'est pas... Ceci? Ce n'est pas. Non, non, c'est pas. Non, non, c'est absolument pas. Ce qui est plus grand que ceci. Ok, pour finir, si vous parlez d'autre part, vous savez que c'est libre, donc c'est facilement flat, ce que vous avez besoin est de savoir quelque chose de flatness. Et puis vous pouvez copier l'argument, qui est peut-être encore là. Non. 
Non, peut-être pas, mais avec des extensions. Donc, pour expliquer quelque chose, vous voyez, c'est le dernier, mais important point. Ce n'est pas le plus difficile, mais peut-être le plus surprisant, parce que cette extension, comme je l'ai dit, est un peu mystérieuse à ce niveau fin. Donc, ceci est le dernier point. Je suis en train de rejoindre les routes de G. Donc, le troisième est que ce âge de bras est p1 over p∞, presque facilement flat, par une infinité. Donc, si vous avez ça, vous pouvez copier exactement l'argument que je l'ai dit avant, mais juste qu'il y a un flamme. Donc, et ceci est un peu surprisant, parce que, comme je l'ai dit, si vous considérez, par exemple, si vous considérez, juste que vous ajoutez quelques... un square de G à A, et invert P, et vous avez le même. Donc, ce n'est pas... ce n'est pas connu, ce n'est pas connu, que ce soit flat, sur A. Vous le faites pour une fin de gif, ou pour toutes les autres? Ceci est... Non, pour une fin de gif, même pour une fin de gif, ce n'est pas clair. Donc, en fait, il y a peut-être un exemple de compteur, en fait, je pense, pas exactement pour... donc, peut-être, pour faire un exemple de compteur, peut-être que je dois aussi invertiser, prendre le square de P. Et peut-être que ce n'est pas flat, et que le conjecteur de dirange implique que, au moins, ce n'est pas... Mais ici, aller à l'infinité fait... un truc... simple. Donc, la idée, je vais juste dire ce que c'est. La idée est de mettre une variable T, et regarder le toit parfait. Donc, c'est une idée géométrique, comme... un autre casque. La place de parfait est attachée à l'infinité, une T, comme ça. Donc, vous avez cette place de parfait, et vous avez le deviser, si je suis essayé, T est equal à G, et vous considérez une petite neighborhood tubulaire. Donc, T minus G est equal à B à I. Alors, la dévise qui est la prouve est de reinterpréter cet algebre qui est construit par le renouvel de la route G et qui est complétée, et qui a pris des logeurs intégrales, dans le renouvel que vous avez quand vous invertez le P. Alors, c'est possible de considérer comme un limiter direct de les renouvels de fonction, donc, ces fonctions sont boundées par une sur cet domaine. Donc, vous devez prouver que ces renouvels sont en fait, presque facilement flat sur A d'infinité. Vous devez toujours prendre le ballon un peu plus bas dans ces six, oui, mon notation est très faible. Vous devez invertir le P et attaquer les renouvels. Oui, par ce point, je veux dire, oui, je invertis le P et je pense que vous avez besoin de ballons, qui montrent géométriquement, qui disent que je prends toutes les fonctions boundées par une géométriquement. Oui. Alors, il faut utiliser les choses qui sont parfaites pour écrire l'arrivée de l'approximation de remplir ce T-minus G par quelque chose qui admite les routes et puis d'utiliser une autre proposition de Schultzer, qui décrivent explicitement ces localisations, à moins, à l'alimentation. Donc, le point est que par la théorie de la perfection, vous pouvez décrire ces renouvels en allant à l'infinité, à l'infinité de beaucoup de routes de G, à moins explicitement par un double collimètre, un complet collimètre, et puis à l'enjeu, vous pouvez prouver par quelque computation, que ça joue le rôle de constance ici, donc c'est résonable d'avoir un effort de plus en plus d'effort. 
Et le signe égal est précis ou de plus en plus, parce que vous êtes fonctionné par un, et vous vous étendez à un entier ou peut-être le bâin est un peu plus plus haut, donc peut-être que vous avez des qualités différentes. Non, non, c'est ok comme ça. C'est ok. C'est quelque chose de ce genre, et c'est un complet collimètre de quelque chose de ce genre. Je veux dire Non, donc quand vous vous appliquez à la balle unique, la limite de balles uniques quand vous vous compliquez vous avez quelque chose qui devrait être essentiellement le step, mais vous ne vous inquiétez pas sur la trajectibilité, parce que la fonction c'est... Je veux dire, même sans prendre la balle unique, si vous êtes un algebrae, vous vous inquiétez. Je dis que ce est le limit de collimètre dans la catégorie de l'algebrae de ce genre. Et puis je vous dis que c'est le limit de collimètre, parce que vous ne pouvez pas, en général, si vous vous inquiétez quelque chose, il peut être approximé par quelque chose dans un stage final, avec la norme très proche de ne pas être nécessairement equal à 1, mais pas exactement, je ne suis pas sûr. Donc, peut-être, c'est suffisant pour votre purpose. C'est suffisant, et je pense que en ce cas, cela fonctionne de toute façon, mais on peut en parler. Si vous voulez, vous pouvez mettre quelques questions à la question de Tokyo, puis de la Belgique. On commence par la question de Tokyo. Je ne regarde pas. On regarde. On le voit. Ok, donc Tokyo. Pouvez-vous vous dire que vous avez des questions pour la question de Tokyo, puis pour la Belgique. On commence par la question de Tokyo. On ne regarde pas. On le regarde. Ok, donc Tokyo. Donc, vous avez des questions pour la question de Tokyo? Non, non, pas de questions pour la question de Tokyo. Non, pas de questions pour la question de Tokyo, mais de la Belgique. Vous êtes de la Belgique? C'est une question stupide. Oui. Stupide question. Donc, généralement, dans la histoire vous voyez des têtes qui se sont occuper, mais ici nous ne l'avons vu pas. Donc, donc... Ok, donc... Donc, est-ce que c'est hésin? Oui, c'est hésin, oui. C'est hésin tout le monde. Par exemple, tout le monde. Donc, quand je dis que c'est cette propre ici, par exemple, cela est prouvé par le tilt. Donc, beaucoup de choses sont prouvues par le tilt. Je... Oui. C'est sous le carton. Non, le tilt est très utile dans toute cette histoire. Ok. Pas de plus de questions de la Belgique. Ok, merci. Les questions de la Belgique? Oui, en rafraque, vous avez également mentionné quelque chose sur Bitcoin, Macau, Ah, merci pour la question. Je n'ai pas eu le temps de mentionner. Donc, je vais retourner à la motivation originale. J'ai mentionné les conjectures homologiques. Il peut être juste un peu plus précis. Donc, en fait, donc, c'est un problème basique et des algebras commutatives que pas tous les rangs sont coïnmecoles. C'est tellement sympa de avoir coïnmecoles rangs. Mais on peut essayer de mapper un non coïnmeculeur rang pour un coïnmeculeur rang. C'est un de ces rangs locales. C'est un de ces rangs que c'est pas un tricot. C'est la meilleure idéale de l'ingénie originale. C'est pas mapper à 0. Ok. En général, c'est hopeless d'expecter que vous faites cela dans le monde de la Nautarité. Donc, c'est un grand coïnmeculeur algebras, donc que que tout le rang de la Nautarité soit mappé. C'est en fait un conjecteur par l'extérieur. Je pense qu'il existe toujours. Et c'est plus plus simple, c'est un conjecteur direct. 
Et c'est ce que je veux dire, ce que je veux dire dans le abstract, c'est que par combiner ces techniques avec des tools, par rapport au extérieur, on peut évoquer l'existence de la grande coïnmeculeur algebras, ce conjecteur. En fait, le seul cas ouvert est le cas de la grande caractéristique. Et c'est ce que j'ai fait avec le second paper et la version prépublée. Donc il y a une section sur ça. Donc, il y a un conjecteur fort qui implique tous les conjecteurs homologiques, mais c'est toujours ouvert, mais je pense que c'est très accessible. Ce grand coïnmeculeur algebras, qui existait maintenant, devrait être fonctionnel week-end. Donc que si on a un map local de la grande caractéristique, on devrait pouvoir construire un grand coïnmeculeur algebras, pour faire un commut certaine. Je peux presque le faire, d'excepter que c'est un problème très agréable, lié à ce algebras, que je ne sais pas, c'est ce algebras, mais je vous rappelle, c'est essentiellement, vous avez une V et vous ajoutez tout ce P, ce T P1 et P1 et P1, et vous avez une P1, et vous avez une P1, et vous avez un P1, et vous avez la laitière de cet alignement. C'est-il mon morphisme? Je ne sais pas. Donc c'est juste une intersection. Comme ça, de toutes ces modules... En fait c'est un ring. Et c'est... je ne sais pas en fait. C'est une question très simple, et encore une fois, si c'est le cas, ou au moins si vous ne pouvez choisir dans un cas-ci, un cas-ci est comme ça, je pense qu'on peut prouver par les mêmes techniques, la fonctionnalité weak aussi. Peut-être juste une question philosophique, l'homospérité en début est vraiment motivé par la compétition d'étalicorhomologie. Donc, dans le cas-ci, vous prouvez pour cette homospérité, l'espérateur avion Carlem, a-t-il des conséquences en étalicorhomologie? Peut-il s'encomputer plus, ou peut-il simplifier les parts de la compétition de l'étalicorhomologie? Je ne peux pas... Oui, je comprends la question. Je ne peux pas vraiment répondre par une très concrète réponse, mais je peux dire quelque chose dans cette direction. À partir de maintenant, pour appliquer les expériences d'étalicorhomologie, vous devez avoir des coordonnées, vous devez externer des rues de coordonnées. Donc, en quelque sorte, vous devez être dans une situation historique. Donc, vous devez... Oui, mais c'est très... C'est un peu maitre, parce que c'est difficile de marcher, et vous devez réduire la situation historique. Ici, ça vous donne une façon de construire une expérience almost perfecte, pour les enveloppes, pour les... les... les algebras, par... par utiliser la nautaire... Comment dire? Nautaire, oui. Comment dire? Nautaire, les marseaux. Normalisation, les marseaux. Donc, vous avez un affin, un algebre, un algin, vous devez utiliser la nautaire normalisation. Donc, vous... Vous devez... ici. Donc, ça vous donne... Vous commencez avec un big, qui est un affin, un affin, un affin, un algebre, vous devez... une nautaire normalisation. Et vous avez un discriminant ici, dans la... C'est à dire 0, c'est-à-dire. Donc, c'est sur le PADX. Et puis, vous pouvez jouer ce jeu, construire A à l'infini, et le B à l'infini. Et ce jeu, il y a B, c'est presque un perfectoïde. C'est un moyen de construire beaucoup, beaucoup de perfectoïdes, ou presque de perfectoïdes, avec la nautaire réduisant la situation de tournage. Donc, dans ce sens, peut-être, c'est utile de compter la comologie, avec la nautaire réduisant les petits... des sub-sets. Merci. Il n'y a pas d'autres questions. Merci beaucoup.
Séminaire Paris Pékin Tokyo / Mercredi 2 novembre 2016 According to Hochster's direct summand conjecture (1973), a regular ring R is a direct summand, as an R-module, of every finite extension ring. We shall outline our recent proof which relies on perfectoid techniques. Similar arguments also establish the existence of big Cohen-Macaulay algebras for complete local domains of mixed characteristics.
10.5446/54721 (DOI)
Thank you, Hamid, for the introduction. It's a pleasure and honor to be speaking at this seminar. Today I want to discuss about the so-called equivalent amount of one number of conjecture with coefficients in HECA algebra. So these form a complex web of interlocking conjectures, so I'd like to start with concrete questions in one specific example. So fix for the moment p equal to 3 and I consider the following three Eigencast forms. So f1 is of weight 2 and level gamma 014 and it is the Eigencast form attached to the elliptic curve of equation y square equal x cube minus 7x minus 6. f2 is also of weight 2 and level 716 and this one is attached also to an elliptic curve. The elliptic curve y square equals x cube minus 67x plus 926 and f3 is of weight 4 and level 1640 and coefficients in a rather large extension of q. And these three Eigencast forms are all congruent modulo p. So let me say a word about why I chose this one. I chose this one because they are congruent but they are not obviously related in any other way, meaning they are not at the same weight, at least this one is under the same weight, so they do not occur in the comology of the same geometry object. Also, neither of them is free ordinary, so they do not belong to a common HIDA family and in fact this one has a3 equal to 0 whereas these two has a3 different from 0 so they also do not belong to a Coleman family. So they are congruent but apart from that there are not many relations between them. Let me introduce a few further notation. So for f4 it means for the one s4 it means congruence relative just to the usual q expansion. Yes. Is what you mean? Yes. Yes. So in fact if you want to be more precise because this one has not q expansion in z, there is a number field and a single prime over 3 such that this congruence holds. The coefficients are in an extension of degree 15 and there is a single prime in the ring of integer of the extension over 3 such that this congruence holds. More precisely. Is that correct? Yes. So let me introduce a couple of few more notation. So q infinity will be the zp extension of q. Gamma is Galois group and as usual lambda will denote the complete group algebra zp double rocket gap. And we will be interested in the variation of the special values of these functions. So more precisely we consider the L function of fi twisted by some character chi evaluated at 1. And because I'm interested in the Tiedig variation I will remove the order factor at p and I will quotient by some period. And here chi is a finite order character of gamma. And I want to ask the following questions about these values. So first can we make sense of them? In terms of algebraic objects like in the Bursche and Sundaric conjecture for instance. So for f1 and f2 this is the Bursche and Sundaric conjecture but for f4 as many people know this is the Bloch-Cato conjecture. So this notation means that you don't consider the factor above. The order factor at p is removed. This is important because it is not defined or because the conjecture should be just. So I am going to state a precise conjecture but the Bloch-Cato conjecture applies to any, well the version of the Bloch-Cato conjecture that I will present applies to any partial function. Second can we predict something about these values? So for instance can we predict the Pietic valuation? 
And I want to distinguish these two questions because for instance the Bursche and Sundaric conjecture says that the Pietic valuation of this special value is related to the order of Shah but of course the order of Shah is hard to compute. So we express something hard in terms of something hard and here I want to know if we can say something definite and easy about this Pietic valuation. Three, because these forms are congruent we could ask are they congruent? Modular Pi. And finally four, are they related in any way? So can we exploit these congruence to say using these values, can we say something for one i, can we say something for the other? So at least about question one, can we make sense of them? There is a very definite answer in terms of the block-Cato conjecture for the motive VF attached to a diagonal form F. So this is the motive constructed by Scholl in 1990 and in terms of its free realizations, the Petty realizations, the RAM and for NEP the Italic Realization. So the only thing that I will need is that inside the Petty Realization there is a class that I will call delta F and what is delta F? It's the path from the cusp zero to the cusp infinity on the modular curve and thanks to Euler-Ponkker's inequality you can see this as a class in the Petty Realization of VF. And so let me state the so-called equivalent amalgam number conjectures with coefficients in lambda for the forms F. This is a conjecture for the following. And it says the following. If you take an O lattice inside the Italic Realization and if you put HI Iwasawa to be the etal tomology, a spec Z with P inverted with coefficients in T tensor lambda Iwasawa, then first of all H2 Iwasawa should be torsion as lambda module. H1 Iwasawa should be ranked one and all the other HI should vanish. And then there exists a class ZF inside H1 Iwasawa satisfying two properties. A, the characteristic ideal of H2 Iwasawa, which is defined because H2 Iwasawa is torsion, should be equal to the characteristic ideal of H1 Iwasawa over ZF. So these are all lambda modules. So that's the first property. And the second property relates ZF to special values of L function in the following way. You fix an integer N and an integer S between 1 and K minus 1, where K I recall is the weight of F. And then first of all you can project H1 Iwasawa in the etal tomology of Z with P inverted but with roots of unity added with coefficients in VF. Then you can localize this group at P. So this gives the Galois tomology of the extension QPZPS with coefficients in the same sheet. Then there is a dual exponential map from this group to the d-deram 0 of VF everywhere at CbFP. And that is isomorphic to a certain space SF, which is a space of cuss form on which the Hecker algebra acts by the eigenvalues of F. Tensor over F with FPGN. And here what are my notation? So F has coefficients in a number field, big F, and P divides P. NGN is the Galois group of this extension. So you have these maps and ZF is in this group, so you can send it into that group. And the first property, the first fact is that the notation on the, is it a question mark? No, no, it's isomorphic too. Description of an explicit description of this space. And so ZF is sent to a rational subspace inside this PLX space. And furthermore, now that we know it's in this rational subspace, we can transfer it with C. And that's, in fact, if Chi is a finite order character of gamma, there is a Pyrgis map, depending on Chi, which sends SF tensor with C. GN to the materialization of F. 
And let me recall that there is a class delta F in here. And this sends ZF tensor 1 to the special value of L of the dual motif of F, twisted by Chi and evaluated at S. So this class ZF and the element H2 Iwasawa compute all special values of the motif VF dual, twisted by Chi and at the integers S. So that's the statement of the conjecture. And it answers, it settles question one above. Can we make sense of these special values? Well we can in terms of H2 Iwasawa and this class ZF. Let me recall the following theorem, which is due to Cato in 2004. But in this conjecture one, in this conjecture one, the statement 1 and 2B are true. And under my hypothesis, so remains statement 2A, so this statement. And Cato proved that the characteristic ideal of H2 Iwasawa divides the characteristic ideal of H1 Iwasawa over ZF, under my hypothesis. And in fact, so the property is it, one knows that the map of such a deep defiance you can more or less. So what, you mean the precise meaning of this theorem? So 2B means that the image under certain maps is equal to something and this means just that the something is in the image. So Cato constructed the class ZF. He showed that the image under these maps is indeed delta F times the special value of the function. And the map is also injective or not? So many of these maps are isomorphisms but not all of them of course because this one is a lambda module and then you. But the injective, I was going to say about injective because. Yes it is injective. Because rank 1, okay. Yeah. So under this my hypothesis actually H1 Iwasawa is free of rank 1. But it is always torsion free. It is always torsion free of course. Okay. Okay. So with that conjecture and theorem in hand I want to comment a bit again on our forms F1, F2 and F3. So for them it is actually not hard to see using Cato's theorem that conjecture 1 is true for F1 and F2 and that the special values, sorry, S equal 1 for them are all PID units for 1 and 2. So using the theorem it is not hard to establish this statement. But the special value of F3 at 1 is not a cadet unit. And so we see that first of all special values are not congruent even though never, this is never, this is always a cadet unit independently of card and this is never a cadet unit. And so we see that these values are not congruent and we see also that in so far as these values shed light on these values this cannot be an obvious process because of this discrepancy. And so the question. So are you claiming for all kinds so these values are not, that's unit for all kinds? Yes. All kinds, five-order character. But you are now considering the super singular case? Yes. A3 equals 0. Yes. Is that okay? Isn't it? No? That's wrong. These are not the values interpreted by a cadet function, right? So piadical functions for super singular case for example plus minus piadical functions should be unit, that's okay I think. Yes. But this is not trivial usually. Okay anyway, so Takeshi said that your 1, 2 is not so good. No, that's okay. Oh, you cannot read. No, for a minute, yeah. Okay. Okay, so in order to relate these values we have to introduce Heiko Algebra. So that's what I'm going to do now. Sorry, a question from Beijing. You mentioned that some of them is ordinary but some are not ordinary, right? No, no, no, they are all not ordinary. All not ordinary. Yes. Okay. On the other hand, one of them has A3 equals 0 and the other two have A3 different from 0. So one of them is infinite slope and two of them are finite slope. Okay. 
So now from now on P is not 3 anymore, it's just any odd prime and I consider robot and irreducible modular GQ representation which is unremifed outside the finite sets of primes which I will denote sigma, robot. And I will also fix a finite set of prime sigma which contains sigma, robot but which might be strictly larger. And I'm going to introduce the following Heiko Algebra, T sigma. So this is the inverse limit on the weight of Heiko Algebra of weight K and here T sigma K is the reduced Heiko Algebra generated by Heiko operator's TL for L not in sigma, generated over some discrete variation ring O. And this is inside the endomorphism ring of all modular forms of a certain level and weights less than K. So to robot corresponds a maximal ideal M of this Heiko Algebra and I mentioned that the localization of T sigma at M is believed to be equidimensional of crew dimension 4 and this is often known to be true. One more notation, if A is a minimal prime ideal of the reduced ring T and sigma then I will write RA for the quotient, the integral domain quotient T sigma, T and sigma over A. And I would like to briefly draw these rings. So starting with RA, so we have a new reusable component like this and what I've drawn here is more precisely the spectrum of RA with P in 13. And if I want to draw the full spectrum of T and sigma then maybe there's another reusable component crossing the first, maybe something like this and so the full picture is the spectrum of T and sigma with P inverted. And because I've inverted P these rings are dimension 3 but I've drawn surfaces. So the meaning being that at any point on this surface there's an extra dimension which corresponds to the cyclotomic deformation or the cyclotomic variable. So if this point corresponds to a modular form F then conjecture 1 describes the special values the L function on that line. And now if I assume that I'm looking at my F1 and F2 and F3 but it's easy to see that F1 and F3 are on different irreducible components. So conjecture 1 describes the special values on that line and on that line but I want to relate them so I need to move first on this surface and then from that surface to that surface. And that is the point of the equivalent amygdala number conjecture with coefficients in HECA algebra. So now I have one last thing on the whole space. So what rings are you looking for? You said that there is this HECA ring T sigma. Yes. This is the localization of the HECA algebra. Okay. Or completion. Okay. But then the thing that you describe are irreducible components. Yes. Okay. And what is the line? So where is it? Okay. So in fact R A is dimension 4. So R A1 of a piece I mentioned 3. So in fact this R A is not a surface it's a space. Okay. But one dimension of this space is due to the presence of possible cyclotomic twist. So all these rings are lambda algebra. Well lambda is this ZP double bracket gamma. So from my picture I've removed this dimension but you have to imagine that on each point there is this line corresponding to a twist. So if F1 is this point, this point would be F1 tensor by some character I can't. But it still lies in the irreducible component. It still lies in the irreducible component. You do as if it is okay. Oh no. Yeah. As if I were moving outside of it. No. Oh, irreducible component are spaces. Dimension 3. Okay. Okay. And there is no preferred way to cut down dimension 2. See. Okay. Well, but I... Okay. So it's not just... Okay. Okay. So here is the conjecture. 
And this is an equivalent of a number conjecture but with coefficients in this vector ring TM sigma. And one possible formulation is that there exists a Z sigma and a delta sigma and for all minimal prime ideal ZA and delta A, delta A, such that first of all the natural projection from the vector ring onto RA induces an isomorphism from delta sigma tensor RA with delta A which sends Z sigma tensor 1 to ZA. And second, if lambda is a modular point of RA, so if this is a system of eigenvalue of a eigencous 4, then delta A tensor with lambda I was our should be canonically isomorphic to the following objects. You take the determinant of the complex, the ethylcology complex of Z with P inverted with coefficients in some lattice, some U.S.S.S. lattice in the chronology of the module of 4. And then there is a supplementary term which is this lattice and you take the plus part, so the invariant part under complex conjugation. So we should have this isomorphism and such a canonical isomorphism and this canonical isomorphism should send ZA tensor 1 to the class ZF of the first conjecture and tensor with the dual of the delta class of the first conjecture. So meaning, meaning that if you take the period map of the image for lambda of ZA tensor 1 for any modular point lambda, then this should be equal to the special value at P of VF star 1 chi S. So if you remember the first conjecture, there was an extra class delta here and now I have incorporated it on that side so that it disappears on this side. So this conjecture roughly states that this class Z sigma can compute all special values of all modular forms appearing as points on the ring TM sigma. That's exactly the right question. So what is delta sigma? So of course yes, so this conjecture is quite meaningless if we don't at least propose a candidate for delta sigma and for the Camelicola isomorphism to appear. So here is the answer. You take UP, a compact open subgroup into the other in the other point of GL2 outside of infinity and define the completed homology of theme level UP and coefficients O as the inverse limit on all UP, U lower P of the homology of the modular curve of level UP, U lower P, U upper P with coefficients in O. So here UP is compact open in GL2 of QP. I take the direct limit on all this level and I point out that inside this completed homology there is a free module of rank 1 Z sigma module over the action of the Hecker algebra which is generated by a certain class delta sigma which is more or less again the path from zero to infinity but seen in this completed homology. And with this Z sigma we are going to build the delta sigma. So delta sigma, at least a candidate for delta sigma. Delta sigma is the determinant of the Hecker algebra of the et al. homology with compact support of Z with sigma inverted with coefficients in the Galois representation T sigma with coefficients in Hecker algebra, tensor the determinant of Z sigma, this Z sigma minus 1. And this is how you should think of this object. So this is of algebraic origin or homological origin. So you should think of this as homologic as algebraic special value. You should think of this part as the part predicting the special value. And you should think of this part with the delta in it as the actual analytic special value. And so the conjecture, one way to rephrase the conjecture is to know that it amounts to this part, the determinant of the part is canonically isomorphic to some module X sigma inside the total fraction ring of Tm sigma. 
The second part likewise, it's a very second an equalizer morphism to some module Y sigma inside this total portion ring. And so inside this total portion ring, X sigma tensor Y sigma is equal to the Hecker algebra. So we have a statement like this and all these isomorphisms should be compatible with the maps, first from Tm sigma to an irreducible component and from an irreducible component to a modular part. That's a way to understand the conjecture is that this Y sigma is the periodic variation of the actual special values. This X sigma is the periodic variation of the formula for predicting the values. Thus this statement is that they are equal over the full Hecker ring and this compatibility says that if they are equal over the full Hecker ring, they are equal at each regular point. That's how to understand graphically the conjecture. So before stating a theorem on this conjecture, let me illustrate it by going back once again to our example. Yeah, so T is actually, I meant to write it down but then forget. So T subscript sigma is the Galois representation with coefficients in the Hecker algebra. So it's the big Galois representation. So back to our forms F1, F2 and F3. So which belongs to different irreducible components as I said before. So for them sigma will be the set of prime 2, 3, 5, 19 and 41 and I will point out that the conjecture gives us two ways of computing special values now because if you want to compute special value, you can take the class Z sigma inside delta sigma and then you map it to ZF tensor delta F star. So you raise the top board so you can compare these. Yeah, so the first, okay, so the composition of these two isomorphisms, one to the irreducible component then to the modular point, tell you that Z sigma has to go to ZF tensor delta F star which for the period map maps to the special value with P removed. So in my case S will be 1. But you can also take delta sigma inside Z sigma and map it to some delta sigma F star that will be inside the materialization of the motif VF and then compute the special value with respect to this class. So maybe you can answer it with ZF and that one is mapped through the period map not to the same special value but with the special value with all other factors that sigma removed. So this L sigma is by definition the product for all prime sigma over all prime sigma of the other factor at this prime L times the full L function. So we see a difference. We see a difference in these two process and by exploiting this difference we can now explain what we noticed earlier because if you take L equal to 41 then you will see that the other factor at L of the first of our modular forms is not a free attitude. And if you translate, so this means that this L sigma, this L sigma and special value with sigma all the factors removed, so this L sigma will have a non free attitude part here. But if you translate this in terms of the module Y sigma above you will see that this means that the module Y sigma is not isomorphic to TM sigma which is perfectly fine in terms of the conjecture because it's just X sigma tensor Y sigma which should be isomorphic to TM sigma. So this means that X sigma is also not isomorphic to TM sigma. The product of all Euler factors for the modular form F3 that is a Piedek unit. And so again if we translate the fact that Y sigma is not TM sigma plus this statement implies that the Piedek valuation of this special value is strictly positive. 
And in fact it implies that this Piedek valuation has to be exactly equal to the Piedek valuation of this Euler factor. So this is very briefly speaking how the conjecture can relate the special values and the Euler factors of one modular form to the special value and Euler factors of another modular form under the hypothesis that they are congruent. So now let me state the theorem. So I remind you that P is odd and I will state the theorem under simplifying assumptions chosen for brevity. So assume that the Galois representation rho bar is subjective so it images all of GL2 of some finite extension of Fp. And also that the local Galois representation rho bar restricted to Qp is irreducible. And then there's a mild condition at primes which are in sigma but not in sigma rho bar. And I will state it if needed in the sketch of proof. You didn't explain how you chose sigma. No, no, you chose sigma arbitrarily. But in the conjecture. But for this theorem, well I can tell you the condition. Do you want to? Well I won't understand what this 41 is. Yeah, so the theorem will apply to my example of course. So the condition is that if L is congruent to plus or minus 1 modulo p, so of course when p is equal to 3 this is going to happen every time. Then you have to know something about the shape of the local Galois representation at L. And in fact if L is equal to minus 1 modulo p, which is the case here I guess, the local Galois representation has to be reducible. Which is also the case because 41 divides only 1 1614, so you are Steinberg at L. But if you had the modulo form, so of course not with these conjectures, but with a super singular, super-quistfidol, I mean local Galois representation at 41, then the theorem would not apply. You will, I mean at least the proof would not apply. And four, there exists a modulo point of this Hecker algebra such that conjecture 1 is true. So let me remind you that conjecture 1 was the original, the original, the equivalent of omega 1 number conjecture of Kato. Then conjecture 2 is true. So the full equivalent of omega 1 number conjecture with coefficients in Hecker algebra is true. And in particular conjecture 1 is true for all modulo points of TNC1. So if you apply this theorem to our example, as I mentioned, it's easy to check that the conjecture 1 is true for F1, and so it implies in particular that it is true for F2 and F3. But I should point out that conjecture 2 is stronger than the sum of conjecture 1 for all modulo points. Because conjecture 1 for all modulo points describes the variation just in this psychotomic line, whereas conjecture 2 describes the variation on a full Hecker ring. So it's conjecture 1 for all modulo points plus congruences between these values. So in the remaining 10 minutes or so, I will briefly sketch the proof. That's correct. Eight minutes or so. So the proof proceeds in three main steps. And the first step is descent. And this means, so this can be sum up in the following proposition. The modules X sigma and Y sigma of the conjecture are compatible with the mapped TN sigma onto RA and modulo maps RA onto lambda I wassawa. In the following sense, X sigma times Y sigma is sum module M inside the total portion ring of TN sigma. So the conjecture says that it is TN sigma but sum module M. And you can specialize on the left and specialize here. And this is a community diagram. And same with this map. 
So once you know the relative position of X sigma and Y sigma, you know the relative position of our images at all module points that already reduce the congruence. And I should point out that neither these maps, X sigma does not go to XA and Y sigma does not go to YA. But the tensor product goes to the tensor part. And the idea of proof of this statement, so it's the tensor product of what? This is about TN sigma. And below is over RA. This is inside RA. The fraction ring is the finite extension of RA. It's an irreducible component of RA. Okay, it's a minimal component. Yeah, it's a minimal component. And you have the same community diagram for lambda Y was a one in which case this is over lambda. So this relies on the purity of the scalar representation attached to module points. And some variance of here as lemma or confluence. Because if you recall, X sigma is something like a determinant of ethyl homology. Y sigma is something leading in conflucent homology. And these are the tools used for If we can establish this first step, then notice that then it is enough to show that x sigma tensor y sigma contains tm sigma. And that there exists a point, a modular point, such that our very much from this modular point, so that it is the modular point x, is lambda y was alpha. Because if the containment was strict, then in this cognitive diagram, we would have a strict containment here. So once you know the compatibility of x sigma and y sigma, you know it is enough to prove the containment and inequality on one point. So we are going to bootstrap this equality to inequality at every point. But first we need to prove this containment, so that's the second step. So the second step is the containment x sigma tensor y sigma here contains tm sigma. And this is achieved by a total white system method. Meaning we construct a projective system of r-signal quotients of tm sigma whose limits is a regular ring. So for the benefit of experts, in the original Taylor-Wides formulation, this works only in the minimally ramified case. And in that case, Taylor and Wides construct such a system and show that the limit is a regular ring. But we all want so that would correspond to sigma, is equal to sigma rho bar. And so that explains the mind hypothesis on L in sigma minus sigma rho bar. In the general case, one needs to analyze the singularity of the local deformation ring at L in sigma, not in sigma rho bar, to get this result. So this is a Taylor-Wides system in Kissins formulation. In Kissins formulation, in general, you don't get the regular ring. And I really need a regular ring, so I need this analysis, and so I need the psychothesis. So the local ring is talking about a different from P. So exactly, I'm looking at the frame deformation ring of rho bar restricted to gql, 10 different from P. Of course you can be like there at L and so on, in that situation. Yes, yes, yes. Yes. So these frame deformation rings are never regular rings, but they are irreducible components, might be. And generally, if you look at this, this is Jack Schultz. Yes, yes, yes. But still, I think the singularity is a lot understood. Yeah, so you're in dimension two, sorry. I'm in dimension two. And in my case, as soon as there is one singularity in an irreducible component, the method of the dimension two. Yeah, so that's step two. And so what is the use of this regular ring? It's because now that I'm over a regular ring, I can apply the Euler system method. So the regular ring I will call B, B, B, T, B. 
And this way, this produces containment for many points of B infinity. So you can find many points S and make sense of this X, S, and Y, S such that you have such a containment and then you can deduce all that containment. And then you just appeal to ICCS4. And ICCS4 tells you that there is one point at which they coincide to a record inside everyone. I'm done, thank you. Thank you. Thank you. We will need to start by Tokyo, then Beijing, and then Paris. Any questions? So at the last point, why you need to do gravity? In the US, our theory, so Carl Rubin observed that if you have, so you want to prove a main conjecture, so it's an equality of characteristic ideals. And Carl Rubin observed that if the characteristic ideals are, or maybe a divisibility of characteristic, he observed that if this does not hold, then you can find a discrete variation ring for which it is very false. And that will yield a new prediction. But if you are over a non-regular ring, then possibly something that is, there is no divisibility for the ring, but there is a divisibility at any image to a discrete variation ring because to just pass to normalization, so maybe the non-divisibility is destroyed by normalization. So I mean, phrased in these terms, maybe this does not hold at any point because at the point of singularity of this ring, this sees to hold. And the whole system method will not see this over T and sigma. But if I go to B infinity, then I resolve the singularities so I can really detect that. So that's what's happening. Is there more questions? Yeah, thank you. That's it from Tokyo. Question from Beijing. I have a very probably stupid question. So in your step, wait, can you hear me? I think you can. Okay, so in your step one, you said, you need only to show one point to deduce, you need to show conjecture one at one point to deduce it for all the points. Yes. I would believe that if you need to show one point on each irreducible component, is it true that you just need to show one point on over all the whole space? How do you come and pass through the intersection and go jump to the other? Yeah, so I can repeat the question maybe. So the question was, I claim in the theorem that there is equality at one point, then there is equality everywhere. And the question was, I can believe that there is equality at one point, there is equality on the irreducible component containing these points. But how to pass from one irreducible component to the other? And that's an excellent question, so that's precisely the point. So I'm sorry I need to write down something to answer this question. So in terms of fundamental lines, this is a statement saying, so we have some xA and so yA and these contains our A. And then there is a point x such that there is equality. And so we deduce equality on the irreducible component. So now how do we pass from this irreducible component to the full space? And the point is that x sigma and so y sigma maps isomorphically to this module. And I really have a containment on that space as well. But this is a highly non-ampliouous statement, because as I mentioned but probably too briefly, it's not true that either of these maps map to these parts. But the way the error term can solve. So a more conceptual way to save is to answer the question, how do you pass the irreducible component? It's to think of the local Langland's correspondence. So it's very easy to interpret the local Langland's correspondence on an irreducible component. But thanks to the result of M. 
Helm and his co-author, we know that we can actually interpolate the local Langlands correspondence over the full space. And that's exactly what I do; that's exactly what I make use of. So at a point of intersection there is nevertheless another factor, which is not defined as a determinant, and that's how I pass from one component to another component. And this is hidden in the other one. So you need each component to intersect with another? So the space is connected, right? Oh, it's connected. So you need to go through intersections from one component to the other. Well, I mean, if I were really moving in a bottom-up way, I guess so. But in fact you can reverse the direction and pull the result back directly: you construct your automorphic representation over the full space and then you specialize. But part of the point is that you can specialize at a point of intersection, and at a point of intersection it does interpolate, in a non-obvious way, but it does. That interpolation, by the way, is only known for GL(2) and GL(3). Yes, yes, yes. But you are in GL(2). Yeah, so, yeah. Okay, other questions from Beijing? Any questions? Okay, that'll be all for Beijing. Okay, any questions, any questions here? Okay, so if not, let's thank the speaker again. Thank you.
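For reference, the Rubin-style reduction invoked in the answer about regularity can be rendered roughly as follows (a paraphrase, valid over a normal, e.g. regular, ring, which is exactly the point being made): if the divisibility of characteristic ideals fails, say

\[
\operatorname{char}(A) \nmid \operatorname{char}(B),
\]

then there is a height-one prime \(\mathfrak{p}\) with \(\operatorname{ord}_{\mathfrak{p}}(\operatorname{char}(A)) > \operatorname{ord}_{\mathfrak{p}}(\operatorname{char}(B))\), and the localization at \(\mathfrak{p}\) is a discrete valuation ring over which the failure becomes a visible inequality of lengths, contradicting the bound given by the Euler system at that specialization. Over a non-regular ring the failure can be concentrated at the singular points and disappear after every map to a discrete valuation ring, which is why the argument above is run over the regular ring \(B_\infty\) rather than over \(T_\Sigma\).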
Séminaire de Géométrie Arithmétique Paris-Pékin-Tokyo / Wednesday, May 17, 2017
10.5446/54723 (DOI)
Thank you for the introduction and thanks to the organizers for the invitation, which is quite an honor for me. I'm sorry that my talk will consist only of a simple exercise, and I should say an exercise still in progress. I will discuss duality for vanishing cycles in étale cohomology with torsion coefficients. Well, this is not really a new topic, but it is one, I think, in which there are still things to understand and even to enjoy. Let me recall the story. I will work with a strictly local trait S, with the usual notations: s the closed point, η the generic point, η̄ a separable closure of η, and I the inertia group. I will take a prime number ℓ different from the residue characteristic of s, and I will work with coefficients Λ = Z/ℓ^ν Z, for ν at least one. Now, if X is a scheme of finite type over S, the functor of nearby cycles is defined as follows. We have the special fiber X_s sitting inside X, the generic fiber X_η, and here we have X_η̄, the pullback along η̄ → η, and here j̄. So, for a complex L in D^+(X_η, Λ), Ψ(L) is defined as i^* of Rj̄_* of L restricted to X_η̄. So this is a complex of Λ[I]-modules on the special fiber X_s. And for M in D^+(X, Λ), Φ(M) is defined as the cone of i^*M → Ψ(M restricted to X_η), an object of D^+(X_s, Λ). In the mid-seventies, Deligne proved that RΨ sends D^b_c to D^b_c. So D^b_c means the full subcategory of complexes whose cohomology sheaves H^i are bounded and constructible. In the early eighties, Gabber proved the compatibility of RΨ with duality. To express the result, it's convenient here to work with a slight modification of the capital Ψ and the capital Φ. Namely, I put ψ(L) = i_*Ψ(L), shifted by minus 1. So now this is an object of D^+(X, Λ) concentrated on X_s. And similarly, I put φ(M) = i_*Φ(M), shifted by minus 1. And to discuss duality, I need a dualizing complex. I define K_s to be Λ_s(1)[2]. I could take Λ_s, but this is more convenient. And if a_X is the map from X to S, I put K_X = a_X^! K_s. And I will use the dualizing functor D = RHom(−, K_X). And if I have a complex over X_η, I still take the RHom with values in K_X restricted to X_η. Gabber's result is a canonical isomorphism between ψ(DL)(−1) and D(ψL), for L in D^b_c(X_η, Λ). There is an account of this in the volume Périodes p-adiques. Now, I think Gabber proved this around 1982, something like that. Combined with the fact that Ψ is right t-exact, it gives that ψ is t-exact, and in particular transforms perverse sheaves into perverse sheaves. Gabber also proved that φ sends perverse sheaves to perverse sheaves. It's not a trivial consequence of the first result; it needs a sort of dévissage. And there remained the question whether φ is in fact t-exact, or in other words, whether φ commutes with duality up to some twist, perhaps. In 1986 (no, yes, 1986) Beilinson sketched a proof of this, or at least a method which in principle could give a proof, using what he called maximal extension functors. Details were never written up. A few years later, Morihiko Saito transposed Beilinson's method to the context of topological spaces and duality there, and also to regular holonomic D-modules with certain smoothness assumptions. It turns out that last year there was a conference in Montpellier and Beilinson was there, and I asked him about duality for φ.
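In formulas, the definitions and the theorem of Gabber just recalled read as follows (a reconstruction from the spoken description, with the conventions of the talk):

\[
R\Psi(L) = i^{*}R\bar{\jmath}_{*}\bigl(L|_{X_{\bar{\eta}}}\bigr)\quad (L \in D^{+}(X_{\eta},\Lambda)),
\qquad
R\Phi(M) = \operatorname{cone}\bigl(i^{*}M \to R\Psi(M|_{X_{\eta}})\bigr)\quad (M \in D^{+}(X,\Lambda)),
\]
\[
\psi(L) := i_{*}R\Psi(L)[-1], \qquad \phi(M) := i_{*}R\Phi(M)[-1],
\]
\[
K_{s} := \Lambda_{s}(1)[2], \qquad K_{X} := Ra_{X}^{!}K_{s}, \qquad D := R\mathcal{H}om(-,K_{X}),
\]
\[
\text{(Gabber, ca. 1982)}\qquad \psi(DL)(-1)\;\xrightarrow{\ \sim\ }\;D\bigl(\psi(L)\bigr)\qquad (L \in D^{b}_{c}(X_{\eta},\Lambda)).
\]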
He said, oh, this is very easy, very simple. And he explained to me a simplified version of his maximal extension function which is maybe not so easy to understand in his paper. His paper is how to glue perversives. This is the method I will discuss here. With a further simplification, I obtained quite recently actually in collaboration with Wet-O-Tongue. So this was the first part, somehow some kind of a swift historical sketch. Let me turn now to duality for phi. Consider the maximal elting character, i to 0, 1. I will turn a p prime, which is a pro-group of order from 1 to n. If f is an l module, I do it by f t for l-tim part. So this is the first part of f under p prime. So this is an ls0 of one shift. The filter taking invariance under p prime is exact. This is the image of the projector, kappa, where kappa is one of the short order of p prime, some of g in p prime. Of course, for any fixed action continuous action, you first pass to a finite quotient through which i p acts. Then we have a decomposition f is f-tim, f-t plus f-t, which is the image of one of these kappa. In particular, we have decomposition psi, psi-t plus psi-nt. This pass to the derived category. Similarly for phi, i upper star m is just this l-tim part, because of course p acts trivial. Then we have phi is phi-t, phi-t plus psi-nt. Now you want duality for phi, but you have duality for psi-nt because you have duality for psi by Gabler. Duality for phi is used to duality for phi-t. Now psi-t has a description which is similar to that of psi. Namely, replacing eta bar by eta-tim, the maximal l-tim extension, to xs, using x, and we have here x eta, gi. Instead of x eta bar, we look at x eta-tim here, which is obtained by pullback from eta to eta-tim, which is the limit of eta of phi in l minus l for phi, in the form of a parameter. So let me do my Q projection here. And psi-t for l in v plus of x eta, psi-t of l, is i lower star of rg lower star. So in the sequel for brevity, we don't need the r before the infront of the derived function, rg t lower star, of l restricted to x eta t, phi-t minus 1. So by PNs here for this Cartesian square, this is also i lower star of j lower star of j tensor l, where j is the direct image of lambda. So this j is of course the ZL of 1 lambda module, a continuous one. So it's in fact a module over the ua-sevron lambda double bracket ZL of 1, which is isomorphic lambda double bracket t, if t is 1 minus sigma, sigma topological generator of ZL of 1. And this is in d plus of x and r. And similarly, phi is also in d plus of x, r. You want to write up a star? I want to r-fight a function. I don't know, sorry. Sorry, it's fine. So phi t of m, I don't know so in d plus of r. Now, are you a pro star? Yes, thank you. Thank you. Okay. Now, why the notation g? Because in fact, j will play the role of an infinite Jordan block. This is a torsion r module, and in fact its torque at eta t is in fact r, t minus 1 over r. So this eta, this eta, this big eta-sheep with the big monotromy action, is a key actor in this story. Now, in order to state duality theorem for phi t, we need some notation r is a lambda algebra, but it's an augmented lambda algebra. And following the Baylinson notation, I will denote by r of 1, 2 tau the augmentation ideal. So if I have chosen t, this is the principal ideal, in fact, isomorphic to t r. But I don't want to choose a topological generator, sigma, or z of 1, and not t. 
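In formulas, the ℓ-tame decomposition and the sheaf \(\mathcal{J}\) just introduced are roughly the following (reconstructed from the spoken description):

\[
1 \to P' \to I \to \mathbb{Z}_{\ell}(1) \to 1, \qquad P' \text{ of pro-order prime to } \ell,
\qquad F^{t} := F^{P'} = \kappa F, \quad \kappa = \frac{1}{\# P'}\sum_{g \in P'} g
\]
(computed through a finite quotient through which \(P'\) acts),
\[
\Psi = \Psi_{t}\oplus\Psi_{nt}, \qquad \Phi = \Phi_{t}\oplus\Phi_{nt},
\qquad
\psi_{t}(L) \;\simeq\; i_{*}i^{*}Rj_{*}(\mathcal{J}\otimes L)[-1],
\]
where \(\mathcal{J}\) is the direct image of \(\Lambda\) under \(\eta_{t}\to\eta\), pulled back to \(X_{\eta}\), a module over
\[
R := \Lambda[[\mathbb{Z}_{\ell}(1)]] \;\simeq\; \Lambda[[t]], \qquad t = 1-\sigma,
\qquad R(1)_{\tau} := \ker(R\to\Lambda) \ \text{(the augmentation ideal)}.
\]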
But this is an invertible r module, and so I can consider its tensor power, and I can denote, calling Baylinson again, r and tau, r of 1 tau, tau tensor n, for n in z. Of course, if I choose t, then I have an atheromorphism, and I can write it as t of 1, 2, t of n. Now, this rn form a sequence for n larger than n, and in particular, we have r contained in r of minus 1 tau. Now, if f is an r module, I can define the US power twist, f of n tau, as f tensor over r of n tau. This construction passes through complexes, and in particular here, this inclusion induces a morphism from f to f of minus 1 tau, which is an identity tensor of this inclusion, which is in general no longer an inclusion, and which is convenient to denote by beta. This is again Baylinson's notation, and this beta is some kind of monotomy operator. It's because if you compose here with the isomorphism with f given by multiplication by t, then the beta becomes multiplication by t, and t is 1 minus sigma, so it's some kind of monotomy. Now, the main theorem is the following. The theorem 1, for m in dbc of x lambda, there exists, which means we will construct a canonical morphism, 5p dm, so there are two twists, the US power twist and the tape twist here. Now, the corollary of this, of course, is that you get duality for 5, so you get d phi m, isomorphic to d phi t, so phi t dm of 1 tau, minus 1 plus psi nt dm, restricted to eta of minus 1, and here, non-canonically, by which I need the choice of t, then you can trivialize this and then you get phi t dm of minus 1 plus psi nt dm eta, so in fact, you get phi dm of minus 1, so you get that, phi is t exact, and you recover, get those results, that phi preserves perversely. The proof of the theorem uses a Bellinson description of phi by means of this maximum extension counter, which gives a sort of a self-dual description of phi. Certainly, the description of the cone of a I over star m into psi is certainly asymmetric when you dualize the psi, dualize, but the I over star is replaced by some I over streak and you don't see anything non-duality, so that was the puzzle. And here is the solution, phi t and Bellinson's side. I will use the following notation, if a to b is a morphism of complexes, I denote like a Dringfeld does in some paper, by a cone of a to b, and a cone of a to b shifted by minus 1. So, that is the reason, any morphism. Now, I will come to the beta here. If you look at beta for f equal to j, you get a suggestion of the kernel is lambda. You can see that I am taking the stock at a t looking at the 1 minus sigma into t. Then, if l is in the d plus of x beta lambda, then from this you get j shrink l is the cocoon of j shrink l to j shrink j lower shrink l of minus 1 tau. You can tensor with l and you get a triangle. Similarly, j lower star, by which I mean our j lower star is the cocoon of j lower star l to j lower star l of minus 1 tau. You forgot some of the projections. Oh, sorry, I forgot the j. Oh, yes, sorry. You get j lower shrink l is the cocoon of j lower shrink j lower star l to j lower shrink l of minus 1 tau. Similarly, for j lower star l, same. Now, what is the upside theme? The upside theme. So, the definition is here. This is a higher percent of j lower star, but we push by a lower star. It is a shift from minus 1. So, actually, this side t can be written as just the cocoon of j lower shrink of j and so l to j lower star of j and so l. There is no twist here. Now, we have several formulas here and we can assemble them into some diagram. 
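Collected in formulas, the twisting conventions and the descriptions that are about to be assembled into a diagram are (again a reconstruction from the spoken description):

\[
R(n)_{\tau} := R(1)_{\tau}^{\otimes n}\ (n \in \mathbb{Z}), \qquad F(n)_{\tau} := F\otimes_{R}R(n)_{\tau}, \qquad
\beta : F \to F(-1)_{\tau} \ \text{induced by } R \subset R(-1)_{\tau},
\]
\[
j_{!}L \;\simeq\; \operatorname{cocone}\bigl(\beta : j_{!}(\mathcal{J}\otimes L)\to j_{!}(\mathcal{J}\otimes L)(-1)_{\tau}\bigr),
\qquad
Rj_{*}L \;\simeq\; \operatorname{cocone}\bigl(\beta : Rj_{*}(\mathcal{J}\otimes L)\to Rj_{*}(\mathcal{J}\otimes L)(-1)_{\tau}\bigr),
\]
\[
\psi_{t}(L) \;\simeq\; \operatorname{cocone}\bigl(j_{!}(\mathcal{J}\otimes L)\to Rj_{*}(\mathcal{J}\otimes L)\bigr).
\]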
Consider j lower star j and so l, j lower star j and so l minus 1 tau. And the similar thing with j lower shrink. So, your residential maps are better and vertical maps are canonical maps from j lower shrink to j lower star. And I have, this is a community diagram and I can look at gamma, the diagonal map. This diagram can be in fact lifted as a community diagram of complexes by replacing l by, let's say, a bit more resolution. So, the benign definition of the xy-factor is xy l, the cocoon of gamma from j lower shrink and j lower shrink to j lower star and j lower shrink to minus 1 tau. So, here, this diagram gives you two community triangles. The community triangle gives you an octahedron and the lower octahedron here gives you a triangle on the cocoons. And you get here a distinguished triangle. So, what is the cocoon here? This is j lower shrink of l, goes to psi of l. So, this is, I forgot to say that this is what we call the maximal extension center. So, it goes, by the remark I made, it goes from v plus of x eta l to v plus of x r. So, we get j lower shrink psi. And here we get the cocoon of j lower shrink to j lower star, which is psi, but twisted a layer of l by minus 1 tau. And the upper triangle gives you sequence here. Here you have the psi, psi t l goes to psi goes to, here this is the j lower star. And these triangles show that psi sends dbc to dbc. And if l is perverse, then j lower shrink and j lower star are also perverse, and then psi is perverse. Note that even for x equal to eta, equal to s, and l, the constantive lambda on the eta, the psi of lambda is not a trivial object. It is given by sequence by triangle 1, where psi t of l, psi t of lambda is just lambda shifted by minus 1, concentrated on the closed point. And then you see that psi lambda eta, in fact, is given by the minimum in x2 of this lambda, concentrated on the closed point and the j lower shrink lambda. So this is the class C in h2, always supporting s of s, so lambda of 1, which is C inside the class, from the class of the point. And contrary to psi, which I do monodromy, this psi has some monodromy. In general, the image of beta from the psi l to 1 tau to psi l, for l perverse, is given by the composition here, you go psi l of 1 tau to psi, and then psi to psi l, and you just get psi t of l. So, sorry, what is h2 as supporting s? Yes. Now, why is this interesting? It is because psi is related to psi. Consider the following diagram, tic n, d plus of x lambda, and consider n goes to j lower star n, restricted to eta, and here, j lower shrink n, restricted to eta. And then, j lower shrink n eta goes into psi of n eta. And in two ways, either psi n eta goes to j lower star n eta. And the composition is a canonical map from t shrink to j star. So, I have a commutative diagram, which again can be lifted to commutative-diagonal complexes. Let me denote this by b of n. Commutative-diagonal complexes, I can see it as a bi-complex of complexes concentrated in degree. So, I will put n in degree 00, so it will be in degree 01 times minus from 0. And of course, I can take the simple associated complex to this, and the result due to Benenson, is that the simple associated complex is nothing but phi t, you have. And the proof is immediate, essentially. You replace this diagram by n, j star, j tensor, m eta, j star, j m eta minus from tau. So, you have j shrink here, j tensor, m eta. Somehow, you display the psi in the j lower star using this expression I gave before. So, here, this. So, you have the bi-complex. 
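In other words (a rendering of the diagrams just described), the maximal extension functor and Beilinson's description of \(\phi_{t}\) take the following shape:

\[
\Xi(L) := \operatorname{cocone}\Bigl(\gamma : j_{!}(\mathcal{J}\otimes L) \longrightarrow Rj_{*}(\mathcal{J}\otimes L)(-1)_{\tau}\Bigr),
\]
\[
(1)\quad j_{!}L \to \Xi(L) \to \psi_{t}(L)(-1)_{\tau} \to, \qquad\qquad
(2)\quad \psi_{t}(L) \to \Xi(L) \to Rj_{*}L \to,
\]
and, for \(M\) in \(D^{+}(X,\Lambda)\), \(B(M)\) is the commutative square
\[
\begin{array}{ccc}
j_{!}(M|_{X_{\eta}}) & \longrightarrow & M\\
\downarrow & & \downarrow\\
\Xi(M|_{X_{\eta}}) & \longrightarrow & Rj_{*}(M|_{X_{\eta}})
\end{array}
\]
with both composites equal to the canonical map \(j_{!}\to Rj_{*}\). Beilinson's statement is that the total complex of \(B(M)\), with \(M\) placed in bidegree \((0,0)\), is canonically isomorphic to \(\phi_{t}(M)\); for \(M\) perverse this recovers the familiar gluing description of \(\phi_{t}(M)\) as the middle cohomology of \(j_{!}(M|_{X_{\eta}}) \to \Xi(M|_{X_{\eta}})\oplus M \to Rj_{*}(M|_{X_{\eta}})\) (the perverse reformulation is an addition for orientation, not stated explicitly in the talk).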
And of course, here you can, so this is essentially equivalent to that. So, this is, of course, a cyclic. So, to calculate the simple associated complex, you can ignore this, and you look at the corner here. So, here, you get i of star m, and here you get phi of psi tm, this is j lower shrink with j lower star. And this is the parent degree zero one, so this is the phi. So, this is just an obvious observation. Now, the main result is the following. The main point in the ingredient to prove theorem one is theorem two. There exists theorem two for L in the DC of s eta lambda. There exists a canonical factorial isomorphism. So, psi dL of one torque minus one to d of psi L. Compatible in the obvious sense with triangles one to n d and psi c dL, as you know, phi 2d of psi 2. That theorem two implies theorem one is easy. You see, look at the picture here. So, dualize, then psi is a dual of two twists. J star is transformed into j shrink, j shrink is transformed into j star, and n transformed into dm. So, if you define d minus of m by the square, you put here psi m eta, here j lower shrink m eta. Maybe I will take n to change notation for m in d plus of x lambda. So, this is a similar square here, j lower star of n, and here you have psi of m eta. So, this will be now in degree, I will still put, no, sorry, m here, I will still put m in degree zero zero, so this will be now in degree minus one zero times zero one. And you see that the d of b of m, note before I do that, note that again, s b minus of n is a phi t of m. Now, dual exchanges the b and the b minus, the d of b of m, m is b minus of b m, and then you get the result, d x i. So, in fact, you get, here is not exactly this, I should put here one star, and minus one. Here you might be surprised because at some point I put double twists and other places I don't put any. For example, at m, I take the dual and I don't put any twist. The reason is that if f has trivial zl of one action, then t twist equals, it was our twist. In fact, you have that our one star, the body law of two stars, it's just under one. So, f of n star is f of n, if trivial, minus zl of one action. So, then you get that and then you get the formula for x i d. Now, observe that we have the canonical sequence, psi t m eta to phi t m i lower star i upper star m. And this map is sometimes called the canonical map. And you have a similar sequence, a similar triangle here. I lower star i upper streak of m going to phi t of m one star, going to phi t of m eta. Where here the map is a variation map. So, the beta, the monodromy goes from side to side. The variation is a factorization of that, which is the cocoon of zero and beta. Now, these are, I don't have time to explain it, but it's very easy to recover these triangles from the description here of phi as a simple complex associated to b. You have a double complex. So, we have two filtrations, night filtration by first degree and night filtration by second degree. The night filtration by third degree will give you this sequence. Second degree will give you this one. Now, d exchanges b and b minus and exchanges first and second filtration. So, then d exchanges these two sequences, these two triangles. Now, let me give an idea of the proof of the zero and two. Which is in fact quite easy. The core of the matter is in this infinite-dromed block J. So, recall that J is q lower star of lambda, where q is suggestion from eta t into eta. 
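To record the statement being proved, Theorem 2 and the way it implies Theorem 1 read, in the notation above, roughly as follows (the twists are transcribed from the talk and should be taken with caution):

\[
\textbf{Theorem 2.}\qquad \Xi(DL)(1)_{\tau}(-1) \;\xrightarrow{\ \sim\ }\; D\bigl(\Xi(L)\bigr) \qquad (L \in D^{b}_{c}(X_{\eta},\Lambda)),
\]
functorially in \(L\) and compatibly with the two triangles (1) and (2) and with the duality for \(\psi_{t}\). Since duality exchanges \(j_{!}\) with \(Rj_{*}\), \(M\) with \(DM\), and \(\Xi\) with itself up to these twists, it exchanges the square \(B(M)\) with the transposed square \(B^{-}(DM)\), whose total complex again computes \(\phi_{t}(DM)\); comparing total complexes then gives Theorem 1, and the canonical and variation triangles, being the two filtrations of the double complex, are exchanged as well.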
So, J is an objective limit of Jn, where Jn is q and lower star lambda, where here this is eta of phi, q minus n, projection qn. And also note that Jn is canonically dual, value in lambda, by the trace pairing. So, the inductive system Jn is also, gives also a projective system, where the map from Jn plus 1 to Jn is a trace map. Now, the mean lemma, which is the rest, we still joined with the white chart. I should also acknowledge a very fruitful discussion with the author last week about these questions. Lemma, which is, for me it was an extraordinary surprise. But this is very simple anyway. So, what is our lambda or J to lambda? So, J is an inductive limit, an inductive limit. So, you think it will be a projective limit, but no, it's again an inductive limit. So, this is J of minus 1 and minus 1 canonically. So, if you like, this can also be written as R lemma or Jn. So, this is a query that's a projective limit is an inductive limit. So, the reason is that, yes, I understand. How do you calculate this R lemma? You calculate the stock at a T. Well, that will be an inductive limit. Here comes an inductive limit of N. A projective limit, an R lemma, actually, of R lemma, theta N and Jn. Now, a simple calculation shows that the po object limit, M of h0 theta N and Jn, is in fact 0. So, it's the projective system essentially 0 and even it is an ordinary 0. And for the other guy, h1. So, it's 0 and h1. So, theta N is the quotient by L to the N, the other one. So, h0 and h1 are invariants and co-invariant. So, h1, theta N, co-invariant with the shift, there is a twist of minus 1. And in fact, essentially comes some thought that you are L, minus 1. And then you apply the limit and that's finished. So, then you get a pairing, perfect pairing. Now, if y is a scheme over eta of finite type. And you take L in the plus of y, longer. You get a pairing J tensor L, tensor J tensor dL into k1, 1, 1. Pairing which I will denote by star. And you say L and N, you say L and dL. The last line. The last line is the variable dual. Oh, sorry. Thank you. You said it. Thank you very much. Yes, of course. Okay. Thank you. And if x is a scheme over s, for L, and d plus of x, eta, and longer, you get a pairing J-dual-shrick over J tensor L, tensor J-dual-star of J tensor dL into k. First in J-dual-shrick of k, then in fact in k of 1 of 1. Thanks. Let me call it double star. So, you have C of x3 in that for L in dvc of y or etc. Then star and double star are perfect. That is identify each of the factors of the dual of the other. So, you have to do it like this. First of all, you put star is perfect. Well, this is easy actually. My simple dv size in fact you reduce to y equal to eta. And then this is the result. This is the key number. So, the double star says two things. A, that J-dual-star of J tensor dL of minus 1 to dJ is the lower-shrick of J tensor L in myomorphism. And B, that J-dual-shrick. Now, changing L to dL, the J tensor dL minus 1 to minus 1 to dJ-dual-star of J tensor L is an isomorphism. Now, since star is perfect, the trivial duality that dJ-dual-shrick is J-dual-star of B gives you 8 star plus dJ-dual-shrick of J-dual-star of B gives you 8 to dL-shrick of B. So, B combined with Gavir's size TdL minus 1 to dL. In fact, what is psi T? It's the cocoon of J-dual-shrick of J tensor L to J-dual-star of J tensor L. So, when you take the dPsi T, then already one of the d you know. So, the dJ-dual-shrick you know already. So, then it remains this guy, but you know for the cocoon. So, then you go for B. 
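In formulas, the objects entering the key lemma and the pairings are roughly the following (a reconstruction; the precise twist and shift in the final identification are the ones written on the board, which are not reproduced here with certainty):

\[
\mathcal{J} = \varinjlim_{n}\mathcal{J}_{n}, \qquad \mathcal{J}_{n} = q_{n*}\Lambda, \quad q_{n} : \eta_{n}\to\eta \ \text{the degree-}\ell^{n}\ \text{subextension of}\ \eta_{t}/\eta,
\]
\[
\mathcal{J}_{n} \;\xrightarrow{\ \sim\ }\; \mathcal{H}om_{\Lambda}(\mathcal{J}_{n},\Lambda) \ \text{(trace pairing, perfect since } q_{n} \text{ is finite étale)},
\]
\[
R\mathcal{H}om_{\Lambda}(\mathcal{J},\Lambda) \;\simeq\; R\varprojlim_{n}R\mathcal{H}om_{\Lambda}(\mathcal{J}_{n},\Lambda) \;\simeq\; R\varprojlim_{n}\bigl(\mathcal{J}_{n},\ \text{trace transition maps}\bigr),
\]
and the content of the lemma is that this derived limit is again \(\mathcal{J}\) itself, up to a Tate twist and an Iwasawa twist sitting in degree one: the projective limit turns out to be an inductive limit. This is what produces, for \(L\) on a scheme of finite type over \(\eta\) or over \(S\), the perfect pairings called star and double star in the talk, from which the duality for \(\psi_{t}\) (Gabber) and then for \(\Xi\) follow by passing to cocones.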
So, this gives you B. We have to some compatibility. You have to some compatibility. We use in compatibility with eta actually. When you look at the way the map is defined, we use trace maps and then we use eta. But surely you have to check this compatibility. So, this finishes the proof of theorem 3. So, once you get of course theorem 3, theorem 3 implies xid 1 to minus 1 is called the xi. Because the xi is also a cocoon. A cocoon of J-dual-shrick to J-dual-star, but with a twist. You know both terms and then you know for the cocoon. So, this is immediate. So, this is finished. Now, let me, sorry, I think I have learned two minutes. There is one question that Le Mans raised about this kilema. How about replacing eta-tame by eta-bar? Would it still hold? So, it's conceivable, but still we don't know. The P prime is a little complicated and we don't know. Also Le Mans asked for J-replaced by other theme. So, for example, you take a finite field and it's algebraic closure. You take the projection and take a direct image of lambda. You have that, the dual is the shift itself shifted by minus 1. We don't know. In this case, no twist, no T-twin should achieve that. What are you taking into account? So, suppose you take now Q, no, sorry, F, spec, F-bar, you take F-clue. And so, take J, now take J or lambda. So, is our home of J-vanda J minus 1? No twist, because you take the ZL quotient, the maximal ZL extension, not ZL or 1 extension. So, then there is no twist. It's plausible, but we don't know. I think it should be like this. Well, okay. The Galois, okay. It should be the same. Now, also, one remark on the proof. You see, this duality is very bizarre. If you consider the poetyl site of the eta, you have the map nu. Now, you can consider the projecting limit of the nu-pasta of GM. So, this is the least shift of arm modules on the poetyl site, which is in fact an algebra, and it's free of rank 1. So, let me define J check. So, it is not 0 here, but somehow the formula of lemma is equivalent to saying that our nu-star of J-strike is torsion, is our torsion. And one can ask for J-Washwick replaced maybe by constructable complexes of J-strike modules, or J-check modules. Now, the applications are with just, so, 6 or 5 applications. So, the main application is that to local acyclicity. So, local acyclicity means the phi is 0, so phi is the dual, so then DLA is LA. So, then you get Takashi-Saito's result on the stability of singular support and characteristic cycle by duality. So, SSD, SDS, NCCD is DCC. SSD is SS, it's too fast. SS of DF is SS of F, and CC of DF is CC of F. Here it could be for x, a smooth over k and F in the CTF of x. So, you have this theory of singular support and characteristic cycle, but these objects are controlled by phi. And then, since phi is a dual, then you get that. Now, I heard that offer can generalize all this very simply to general basis. So, deep-side T is the TD, and also for phi T, or the S generally. And then, from this, it should follow that LA is also ULA and DLA is LA. So, this is the cryptic, so local acyclicity. So, LA equal to ULA means universal local acyclicity, that is local acyclicity of the base chain. And DLA equal LA over S, regular, excellent, etc. So, the dual of the local acyclicity is also. But, well, this is just maybe some sort of open at the moment. At least in the smooth case, but my time is over, so I didn't hear it out. So, this will further study, I think. Thank you very much. Thank you. 
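In symbols, the applications listed at the end are (for \(X\) smooth over \(k\) and \(\mathcal{F} \in D^{b}_{c}(X,\Lambda)\)):

\[
\phi \ \text{commutes with duality} \;\Longrightarrow\; D(\text{locally acyclic}) \ \text{is locally acyclic} \;\Longrightarrow\;
SS(D\mathcal{F}) = SS(\mathcal{F}), \quad CC(D\mathcal{F}) = CC(\mathcal{F}),
\]

recovering Takeshi Saito's theorem on the stability of the singular support and the characteristic cycle under duality; the statements over general bases and about ULA versus LA are, as said above, still to be worked out.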
So, maybe we start by questions from Tokyo, then Beijing, then Paris. Hello. Tokyo? Yeah. So, this last result you mentioned in your work, what is this sub T? Well, so I think, I think he hopes, some theory of team, team nearby cycles. Yeah. No, anyway, the, the, the, so anyway, the second would be LA equal ULA. So, using all those results, one is to, so there is an alteration or modification to the vanishing cycle, the lino-monde vanishing cycle would be as well, and then you can use test curves, so to speak, to, to control the enough, you have enough to control the, some constructible shifts of the, we entered the product of, was using it, test curves. So, you reduce to, so, and then the, the, so I'm not really claiming to, well, I did not study the, so anyway, maybe discussion is a little bit, because I did not actually. But I think Takeshi's question was about the PsyT, PsyT. So, you, I think you told me that you, you would, you were hoping for a theory of the team nearby cycles, over a general basis. Is it right? Maybe this is a mixture of something from different times and something. And so, possibly, but the problem with the Oriental Topaz is that here you, you would have to, to take suitable stratifications and, well, you have also probably to, to find substitutes for the XI fronter. So, certainly you have to do something like that, if you want to get to that. There are notions of tameness for Shies, but I'm not really claiming, at the moment I'm not, no, I'm not really claiming this. What I'm saying is that there should be a good behavior for duality and perverse filtration. So that the idea is to, to do it over any S by some comological descent, reducing stress, I mean, there are some, so it's kind of, but not, but of course, if you have a situation with good behavioral vanishing cycles, then because you know it for test care, some results, you should get something for the wall-seeing, but this is not, well, probably you can get something, but this is not really worked out. So, it's not, sorry for, sorry for, yeah. I have a question on the name. So, so this factor XI, it's called maximal extension factor. So, why is it called maximal extension factor? So, take eta, so take lambda and eta. So, you have a several extensions. So, the J-Row star, which is just lambda somehow, and then you have also J-Row star, which is just lambda. But in fact, the XI as a part is not only, as a part in degree one, in congealed degree one, in degree zero. So, it's not, it's a complex with the non-trivial acrylentho, it's bigger somehow than the J-Row star. So, in fact, it's some kind of maximal sub-jewel extension of this lambda. So, both J-Row star and XI are both sub-jewel extensions, but one is bigger than the other. And it seems that in a suitable sense, maybe a sense which is not completely clear to me, I should confess, this is the biggest one. Maybe the demodule viewpoint, it can be seen otherwise. Or, do you have some explanation for this maximal extension, other than it is somehow bigger, it has more stuff? Yeah, there was this old paper of Perinz von Jumenschen, so I remember, but I did not look at it recently, so I don't remember the words that you are just saying. But it's certainly bigger, it has more stuff, and more monodromes, also. So, what's the question? Thank you. So, you go to the gym, it's the two-person exercise. So, the old question. So, you started with strictly Hansel in DVR, what about the case of Hansel in DVR? Well, you asked me the question before, yes. 
So, now that I have those twists right, you see the 1, 2 and minus 1, so then I, of course, this is functorially, if you have some eta 0 and eta and take an action by Galvat, but by transportation structure, you transport one thing into the other, so then somehow you have a certain compatibility there. But you have to transport from eta to eta-part, eta-thing, eta-thing, then you transport the gene, you transport everything. So it is, in fact, certainly, so I could have neglected, of course, both T-twist and Iwasawa twist, but I didn't want to do so. So then I kept track of this T-twist, and then isomorphism I have is completely func-torial if I have a nice isomorphism from something on eta-zero and two things on the pullback on eta and I take an isomorphism that is compatible, so there is no problem. So in fact, that is also in some arithmetic situation that then you can take the graded for monotony filtration and then no twist, no Iwasawa twist appear there, and then you get that D or graded for phi and for psi. Thank you. Thank you. There's no more questions from Faijin. Okay. So any questions in this? Yes. Just to precise what method you're using. So in the case of this duality with this J, large thing for the scheme of the finite type over X and stuff. So in the technique, do you use some modulation to reduce? No, no, no, it's simply you reduce something smooth and then you use trivial duality. So you use a global duality assumption like this. And F over 3D is on. So this is a trivial thing. So you don't need that. No, you don't need any alteration. Okay, so you... So this is a trivial divisage. Because you check your clarity, you checked it just for eta and then you have to... No, no, no, of course you have one Y. So then you reduce to this J, large thing of something locally constant and then image on something and look at it constantly smooth and then you reduce to in fact something smooth and then... Okay. So when it smooths and locally constant, then you see the D of your star is... Okay. So you have a small size of a per shriek and then you use a junction and that's finished. You reduce to eta. No, no, there's nothing deep here. But you're right that for the B, maybe the B here, B of course is difficult. And when in my previous approach, I proved the... I didn't prove B, but I used the poet outside and then I could get that but assuming psit is constructible. So actually it's almost as difficult to prove that psit is constructible as it's compatible with D. So it's not so great. And then if you... the interesting thing would be really to prove for other... As you suggested actually for torsion, for constructible J shriek complex to prove our torsion. The reason for this is that you have a sequence, 0, r1, say... Let me write c minus 1 so that we can understand what it is. And the dual in here. So you have... By using a little bit of water and the duality, you get this. And so a thing which are filled by this gives you the... And the morphism is like this and this with a shift. And so this gives you the torsion. So but to do that then you will need alterations and the visage I suppose. But also some non-trivial results on the poet outside which are not so comfortable. To do a... Or to prove generalization of this torsion thing. Other questions? Yes. I have a short one. Yes. So I suppose the sort of the same proof works in the complex analytic case. Morico Cytou gave a proof with Q coefficients. First of all, doesn't work with Z coefficients. And using... 
It's not exactly the same proof. You see, Beilinson's original idea uses a scattering method. It's a big word, but it means that instead of J you take some truncation. So you have this infinite Jordan block, and you look at a finite Jordan block and its dual. But then the dual involves a translation, and somehow you can ignore this translation, so you work up to translation. This is the so-called scattering, and this is what Morihiko Saito does. So you take the length-n Jordan block and its dual, but then you get complexes in degrees minus n to zero here and zero to n there, and then you use this. But it's not completely clear how to... So it has a Ξ functor, of course, but not exactly what I defined here. There is no... no Iwasawa twist, there is no such thing. So we'd like to have something over Z, but it's not at all clear how we can do it. Other questions? So if not, let's thank the speaker again.
It was proved by Gabber in the early 1980's that R\Psi commutes with duality, and that R\Phi preserves perversity up to shift. It had been in the folklore since then that this last result was in fact a consequence of a finer one, namely the compatibility of R\Phi with duality. In this talk I'll give a proof of this, using a method explained to me by A. Beilinson.
10.5446/54497 (DOI)
So just before you start, Luis, on behalf of the openSUSE project, I'd like to present you with a gift to further the collaboration between GNU Health and openSUSE. And for some reason people thought it was good that I gave it, since I work for a company that has the best interest in these things. So on behalf of the openSUSE project, I'd like to present ten Raspberry Pis to the GNU Health project to further your great work. Thank you so much. I'm humbled. This is what collaboration is all about. And without you guys, we wouldn't be here today, and people all over the world wouldn't be benefiting from GNU Health and from openSUSE and from the Raspberry Pis. Thank you so much for what you did. Thank you, my friends. My pleasure. Thank you, Doug. Really, thank you so much. Thank you for the beers also. It's been wonderful being here. It's funny, I actually don't know what to talk about, because Axel pretty much said everything already. But I will just try to go through a little bit more of the philosophical concept of GNU Health and then a little bit about the technical part of it. First thing: we are celebrating 10 years, which is something really cool for a free software project. Let me see if we can get this one to work. Yeah. So, very first thing: what is GNU Health? Well, GNU Health is a social project. Okay? It has to do with social activism. It has to do with social medicine. It has some cool technology associated with it, but it's a social project, and we cannot forget that, because technology means nothing if we don't have social activism to go with it. We started in 2006 in Argentina, where we were working in public education. We were going around rural areas and schools and installing GNU/Linux boxes. And it was there that I got to the point of saying: hey, these kids are walking 10, 12 kilometers every day, back and forth. They pretty much needed shoes more than computers. So that was an eye-opener for me. I said, why don't we work on primary care? Why don't we use the technology to improve the lives of these kids and their families? And that is what triggered what is today GNU Health. At GNU Solidario we pretty much have these two sides. Okay? So we work on the technology on one side, but on the other side we do outreach and get together with other NGOs to discuss all the socioeconomic determinants of health and disease that we are suffering today. That's why we do this yearly conference, the International Workshop on e-Health in Emerging Economies. It's not just technology; on the contrary. So I usually say that there are two types of misérables (this comes from Victor Hugo). Those who suffer misery: right now we have 20,000 children that die every single day in misery from preventable diseases. These children die because of human actions, or the lack of them. So when we talk about cholera, when we talk about prostitution, when we talk about child slavery, we are talking about social diseases. And if we think about these diseases, they account for many more casualties than the ones we see in traditional, biology-based Western medicine. So someone that doesn't have enough food is sick. Someone that doesn't have enough family affection is sick. Someone that is forced into prostitution is also sick. And we have to take care of that. And that's why GNU Health puts a lot of emphasis on people before patients. And these are the other kind of misérables. They are also misérables, but these are the ones who cause misery. Number one: war.
Many countries in Europe make bombs to be used in other countries. They create tragedy. And when these people have to flee their countries because they don't have other way around, we close the doors to them. So we have to do something also in this aspect and make sure we choose the right politicians to cut this macabre business of war. We all love animals, right? Don't we? Oh, this is such a beautiful animal. Why do we kill them? Why do we kill animals? We don't have to kill them. They are friends. They are not food. And today if we look at the farming industry, it's not only inhumane, but it's very unhealthy. And it's one of the first contributions to the global warming. So that's something that we also should take in consideration. Lots of cancers, lots of cardiovascular diseases are coming from eating animal-based food. So the idea with new health is we have to move from the standard reductionist traditional way of making medicine, which is I treat sick people. So it's reactive, right? We are not doing anything or we are not doing enough to keep a population healthy. We are just treating somebody that is sick and then we don't do much to avoid that person from getting sick again. So we have to move to this area here. Okay, this is the system of disease and this is the system of health, where now we have people before patients. Here we have a patient-centric, which is already a bad start because if I have a patient, means that I have someone that is not doing well. It would be much better if we have somebody that is well and we prevent him or her from being sick. And now we not only have biology, which is of course very important, but we also have the environment. As I said before, education, affection, nutrition, exercise. So we move or we include also the social and psychological aspects to the well-being of a person and by the result of a family and society. So this is new health. Okay, so new health has four main areas. The first one, as we said, we are going to work on those underlying components to make sure that we have the demographics. We know how people is doing, how people are living. So economic status, domiciliary units, institutions and so on. Then we move to the patient. This will be the typical medical record where we have the histories, prescriptions and so on. The next step will be we have to take care of the health institutions that we have. So we do the billing, we do finances, we take care of our pharmacies, lab information management systems, emergency, etc., etc., ambulances and so on. Finally we have to make sense of all the data that was collected in this transactional or operational part into epidemiology, into health campaigns, into civil records and so on. So this would be for the health authorities. I won't talk much about this because we are in a free software conference so we pretty much know what this is but we are free software, of course. We are free as in freedom. So we use things such as open street maps to georeferentiate and geolocate events or objects as domiciliary units, the health institution maps, the demographics that we were talking before. Before having somebody at our health institution we can take care of all of this and do prevention already, get the social workers go on to those areas and work with the families. It's modular so it depends really on what is the characteristics of your health institution. 
So depending on what you do you will install or not the different components that are packages this is just a short list of some of the packages that are currently officially in new health. Within the patient management we try to have something that is abarcative enough. So from there I can kind of have a control center and do actions like appointments or labs or imaging etc. Without losing focus on what are the critical patient information for example. We also integrated with CalDAF so you can have your calendar system for appointments and so on. It's funny because some people think that we only work on primary care. Primary care is by all means the main component of it and we must have it. You cannot scale up if you don't have the foundations of your health. But we also work with research institutions, we work with bioinformatics in the area of genetics. So new health is a framework that allows you to do social medicine but also a state of the art bioinformatics and genetics. This is a lens functionality where you can actually link it to your analyzers and your institution and get the results, everything in the same system. Integration with packs, some functionality from obstetrics and pediatrics. Billing so when the patient comes to your health institution you will be able to say hey this is what we have done and depending on the insurance and depending on the rules that you have on your country you might have to build them or not just telling them hey this is what we have done to you at any given moment through your state at that institution. Of course we need standards and we use industry standards to interoperate with other components of the health system. Disease groups, I don't really want to get much into this but it's quite important because if we think again about this concept of social medicine in the case of tuberculosis for example we know that it's a respiratory disease but also it's a notifiable disease. So any index case that we have of tuberculosis it will immediately send an alert to the Ministry of Health without the need of any further action from the health professionals. So it's a good way also to prevent outbreaks of diseases as TB, as dengue fever, as Ebola or whatever. We integrate GNU-PG so you can pretty much sign every single document that you have in your system from medical evaluations to death or birth certificates, prescription and so on. So pretty much you go into paperless medicine. Axel was talking about before this we need something that makes no sense that I as a physician in the Canary Islands, somebody from Barcelona or from Madrid comes to my office I have to start from zero. I don't have her medical record because Canary Islands uses one system, Barcelona uses the other system and in 2018 there is no reason, there is no technological reason not to have a unique ID and a unique medical record not only for my region or for my country but for the whole world. We should be able to have a unique ID that allows me to go from Spain to the Czech Republic and if a car runs me over and I hope not they will be able to see what is my history. They will be able to see what I am allergic to, penicillin or whatever and that is feasible that can be done, the technology is there and we go that way. So when somebody wants to implement it, well they will have the framework already to be able to do so. Analytics and data aggregation is key in health. 
We were talking before about genetics and bioinformatics but also in terms of population age, what is the incidence or prevalence of different type of health conditions, etc. In this case for example we can pinpoint what are the hotspots for violent injuries, violent injuries coming from a car accident, from a suicide attempt, from a sexual assault, whatever, you can georeferenciate all this and contextualize these issues across your region again and then of course take action on top of it because we do this so somebody will have somewhere along the line take action on top of it. Health campaigns, have vaccinations, see what is the status of the vaccination status of your pediatric community. And now, well again, Axel talked a bit about it so we have this new health in the box and not better said, new health in the box. So we have everything already there which allows you to work independently from internet connection, very good in rural areas, very good in the miscellaneous units, good at labs. Of course this is not done to put it on a hospital as the single database system is not the purpose but it's the purpose for personal health record labs, lab information systems and so on and it consumes very, very little energy so in these areas of Africa or Latin America where you don't really have enough energy, this system will work very well. New Health, this is a project ongoing project and we need the health of the younger people here because you younger guys are the ones who knows and you were born with a cell phone when your mom was breastfeeding you so you are the one who should be able to active collaborate in this type of projects. The camera, it allows you to link it with any USB device that has a camera so from histological samples to patient registration or personal registration of tomology, etc. The Federation, this is probably the largest and most exciting project that we have since we started. The Federation allows you to have multiple nodes on a distributed environment, independent, autonomous, heterogeneous system, different operating systems, different technology, but yet they will be able to aggregate information and be part of this Federation network or Federated Network. Of course thank you OpenSUSE because as I said, not only for what you've been doing with the raspies or also making things easier for the community at large. I work on making sure that we have a standard vanilla installation for whatever operating system that you want. Axel works and the OpenSUSE community works by getting what we have at the vanilla installation with the same functionality and just do an installation of a package itself, which is great. We of course have to keep on working on it, but I would say that OpenSUSE is today the distro of the community that is putting the biggest effort on the new health project. So thank you again for that and we are very happy. So being vegan and having a Giko as an ally is really cool also, so I feel part of the community in that sense too. I would like to stay here. We all know the four freedoms, right? But it's about collective freedom. It's not about, Axel talked a bit about also, this is not about my personal freedom and I forget about the rest of it. Because for that free software means nothing. You could actually have those four freedoms, but if you don't get into this positive feedback with the community and you don't give free software means nothing, it's about activism. 
It's about Lamarck and Darwin, all we had these two theories about what is evolution, where Darwin was saying, well, defeat this wing and that's the species that's going to be and Lamarck said no, evolution and progress is about collaboration and I think that free software is about collaboration. Competition is not good. Competition is not good, it just creates issues. Cells get together to form tissues and organs and it's because of this collaboration that we are humans. Why are we going to break the evolution chain? Why do I say, well, you know, all the work that the community has done, now I take it to myself and I don't share it anymore. That is being seen a lot today in the free software and open source community and we have to stop it because they are taking that sort of legal resource and saying, hey, you know, I'm abind by those four freedoms but at the end of the day it comes from what you do in your daily activities. So think about that, think about collaborating. Free society, it's about social activism and having a good government and having good citizens. This is just an enzyme. This is something that by itself it won't do nothing, it will accelerate once we have the right substrates otherwise it won't do anything about it. So finally, some little history about what are we working with. The academia is key. Where is in the Kuala Lumpur United Nations University or with the guys at MIT and Harvard in the States or Matanza's, Cuba, Italy or whatever, academia is key for us. It's where research is being done. It's where we need to have these guys working actively with our project. So anyone on the academics please contact us. We have this alliance of academic research institutions to actually foster the adoption of new health in the academic world. University of Entre Rios now by the end of June will be going there. They have a Latin American new health conference and they've been doing a wonderful job in terms of primary care in the region. In South Africa, this is a nursing school in Cameroon. The project about new health in Jamaica, Axel already talked a bit about it. Again Entre Rios, Laos. Our relationship with WHO is key also. We train the guys so now it's sustainable. Its local capacity has been built there so they can move on by themselves. This is one of our newest members right here, this little guy. Mexico, the Red Cross. Because this is where I was talking about the importance of the raspies. They work out of solar energy. So at night they work on the batteries and of course these things will take so much that it will be able to keep them running during the weekend or whatever. So this is a wonderful project. This is a small project but it really shows the social medicine aspect that we were talking before and how it can help. You know here pretty much you have HIV AIDS, malaria and tuberculosis. Those three MDG6 components and children are orphaned and they already have HIV with it. There is a lot of stigma associated to it. So it's really tough. For them it's really tough for the health professional working. So if we have something that we can track and better serve to improve the living conditions of these children, the whole project will have make sense already by now. Pakistan, he talked about it. And of course this is the latest member of the community. It's a huge project, largest hospital in Asia. We are very excited but we are also with a huge responsibility here. Not just as we as New Health at this point but as the free software community at large. 
This project will be a success if we all get involved in it. Otherwise it will be hard. So we have to work together. And I'm positive we will have the support from openSUSE to make sure that they choose the best technology and the best engineering at the database level, the operating system level and the application level. And it will be a success. Now we are having a delegation from them in June to keep on being trained and so on. But it's a massive project, so I count on all of you guys here. Again, there are different areas of collaboration. There are a lot of things, from privacy and security, documentation and embedded devices, to bioinformatics and LIMS. We talked about the mobile application, the Federation and distributed systems, OpenStreetMap, etc. So there are plenty of things to do from very different aspects of the project. Yeah, this is the new generation. Here is the little raspy, and here is my child, and he's working, or pretending he's working. I think he's actually working with it. And yeah, it's part of parenting also to pass this on to the next generations and make sure that these types of projects live on throughout the years and become part of the public health system. Because at the end of the day we are doing this because we need the public health system, and public administration at large, to use free software. There is a huge contradiction when public administration uses proprietary software. It doesn't make sense. It's contradictory by definition. I have public here and I have private there; it's basic, they don't go together. They don't talk to each other well. And not only that: that money is going to transnational companies somewhere. It's my tax money. I want it to remain in my country and for my people, not for somebody that is already filthy rich. So remember: public administration plus free software is the best way to go. Reach out. As I said before, it's not just about the technical side. It's about passing this philosophy on to the new kids. Put it in the high schools. Talk to them. Tell them about the importance of free software as a philosophy. Come, join us. In November we are doing the third GNU Health Con, and it's going to be the 11th International Workshop on e-Health in Emerging Economies. So it will be great having you guys there; spend some time in Gran Canaria, which is also good, and bring some ideas and have fun with us. And I love this sentence from Rudolf Virchow, who was the father of social medicine, where he says, you know, medicine is a social science, and politics is just medicine on a large scale. And we have to think about that. Technology is great, but it's just a tool. We use technology just to apply this. Technology by itself means nothing. On the contrary. Today in our Western societies we have lots of MRI scanners. For what? Many times you don't have the health professionals to do the MRI. And sometimes you have the MRI to detect something that could have been prevented. So medicine is not about over-sophistication of the technology. Medicine is about keeping a society healthy from all the points that we were talking about before. And we have to keep that in mind. And I think that the free software community and this concept of social medicine fit perfectly together. And the money that you are spending on, you know, state-of-the-art MRI scanners, you could put into preventive medicine and primary care, and into telling people you shouldn't eat all these sugary things, and, you know, staying lean and keeping healthy habits.
So with that I say thank you for coming, and I hope you enjoyed the presentation and got a better idea of what the GNU Health project is about. Thank you very much. Thank you. Thank you.
GNU Health is a social project that provides a community-based, Free/Libre Health and Hospital Information System deployed in many countries around the globe. GNU Health combines Social Medicine and Primary healthcare principles with state of the art advances in bioinformatics and precision medicine, delivering a valuable framework for governments and Public Health institutions, as well as for academic and research organizations. In this presentation we will go through some of the existing and upcoming technologies behind GNU Health and their use in different scenarios. The GNU Health Federation to integrate large, heterogeneous health and research networks; The integration with OpenStreetMaps and the mobile application will be some of the topics. Finally, we will present the GNU Health embedded project, a joint effort with OpenSUSE, to use GNU Health in single-board devices such as the Raspberry Pi. We will go through the many benefits that this project brings to communities around the world, delivering Freedom and Equity in Healthcare, which is our ultimate goal.
10.5446/54498 (DOI)
Good afternoon. Thank you for being here. Thank you for indulging me to ramble for 45 minutes about law and legislation and even thank you for pretending to be interested in that. So before I start, I want to take a moment of especially my gratitude towards the organizers of this awesome conference as a conference organizer. Myself, I actually was project lead for OpenSuitor Conference 2015 in The Hague. I know how much blood, sweat and tears literally rose into this kind of thing. So first thing, if there is anyone from the organization here, then they will hear it and if not they will sit on video. Please give the organization of this conference a round of applause. And this also was a test to see if you are listening and away. Thank you. So let's move on then. So who am I? My name is Hans Teravath. It's not Robin. Sorry, that's a typo in my slides. Robin is a very good friend of mine. And we do presentations together. So that's why his name turns up here. I'm Hans. I've been working in IT and IT security for about 20 years by now. So yes, I'm that old. My targets of my target areas of business are primarily heavily regulated environments like government, medical and healthcare institutions. The type of business that whenever something goes wrong, either someone has to pay a lot of money or someone, and this I find personally much more scary, or someone gets hurt, because especially in healthcare, information, integrity, confidentiality and availability is literally what is the difference between, or can be the difference between life and death. But to keep the atmosphere a bit lighter than life and death, I also do other stuff. I organize classical concerts. I have a classic Mercedes Benz and I grow my own wine grapes, because everyone needs a hobby. But let's not ramble on too much about me. Why are we here? Why are we here in this room? There is a paradigm change going on in European law. We are moving from a construction where laws are typically descriptions of what you are not allowed to do towards models that outline general guidances of compliance and general methodology to implement measures to mitigate risks. Where does that come from? Well, we are in an era of change. Our current times no longer allow for law methods to be overly specific when it comes to certain technologies, when it comes to certain economic models, when it comes to basically anything that was an unmovable truth almost 20 years ago, by now has become moving. And especially as the world somehow appears to be bigger than Western or Eastern Europe, you need to implement laws and regulations that are not only culturally applicable within a certain continental realm, but can be used as a global universal type of methodology. So, but first let's make sure that we talk about the same thing here, because I'm going to talk about cyber stuff. So what is cyber? Well, cyber is literally nothing. Cyber is coming from the ancient Greek term, which means being able to basically control, push, or make something happen. So how does that relate to cyber crime, cyber sex, and other types of cybers that we have nowadays, held to all the cybers, it doesn't mean anything. What it does mean, however, is that somehow a lot of people seem to think that it actually means something. 
So because a lot of people think that it actually means something, I'm going to pretend that I am one of those people who actually think it means something, because else it would take me too much time to get out all of those cyber references and come up with another marketing term for this whole container thing. So where does this come from? Or at least where does the cyber security strategy that we now see developing, where does that come from? A couple of things. First of all, laws and technologies don't necessarily complement each other. Just like, well, maybe this is a bit of a stretch, but open source software and economic business models, they're also not entirely directly related. They tend to exist perfectly happy in each other's vicinity, yet being able to sell something that is free, kind of like is a contradiction in terms. What you need to do when you want to sell something that is free is you have to focus on the circumstance around that particular aspect. And then you have to come up with methodologies and procedures and with value added regulations to ensure that the component that you're selling even though it is free becomes economically attractive from a different perspective. And that is actually what the cyber security strategy is all about, because cyber security is a term, is a concept that is a meaningless, be undefinedable, and see, well, is there even a see necessary? But what is valuable is information. What is valuable is the raw data. From the raw data, we create information by correlating raw data into entities, relations, and whatnot. From information, we create knowledge. And based on this knowledge, we take decisions as a society. Believe it or not, even politicians tend to think that are influenced by actual knowledge. And one of the things that I see happening nowadays is that although this is maybe hard to believe nowadays, laws actually go above technology. And yes, of course, there is technological opportunity to keep certain information out of the realm of authorities, for instance, encrypted communication ensures that you are able to communicate with other people without government intercepting your communication. Yet still, if a law in a country decides or at least defines that at some point in the judicial system, you should be able, or the government should be able to break into your systems from surveillance to be damned, or that will happen. Luckily, we are in Europe, and Europe has at least since about 75 years ago a particular track record of keeping privacy or citizen privacy as one of its core competences. And I know when I'm saying that, and where I'm standing right now, 30 years ago, in this particular city, for instance, that type of liberty and that type of privacy was not yet taken for granted. So privacy and laws and technology are, well, let's say, interesting, not entirely compatible concepts yet. But you see is nowadays, I don't know, who has seen Mark Zuckerberg testify, or promote himself in front of the European Parliament recently, who has seen that, a couple of people, won't meet it. And so what I found interesting there is not necessarily the whole marketing show and the whole circus that was put up around that, but what I found interesting is the actual fact that the European Parliament pulled him over to question him. 
And what I also found interesting is the response of several of the parties in the European Parliament, as in that they have been profoundly dissatisfied with the level of answering that Zuckerberg provided, and that they will follow up. So that is interesting from a point of view that you now see politicians not only taking an interest, but also seemingly taking responsibility to gain the knowledge needed to actually understand the concept and do something with it. Well, that's the same thing that's happening, or has been happening actually on a cybersecurity realm since about the early 2000s, when we had the first more or less globally accepted certification evaluation scheme for IT security-related audits and assessments, the common criteria that was around 2000. That has been devised and updated in 2014. And that's basically what we're going to talk about now. But still what we see in the European Union, same as with the GDPR, before the GDPR countries regarded security and privacy information security as being a national shame, somehow with the premonition that the internet would stop at their borders and that parties wanting to exchange in international trade or international data sharing or exchange, would gladly implement geo-fencing or whatnot to do that. Well, that somehow massively didn't happen, and especially companies like Google or Facebook or the usual big suspects, they're not going to. You have to approach this as a continent, I would say. And what we see nowadays is on the cybersecurity approaches, there is a very nationally focused approach. So what will happen now is that the European Commission is drawing up a new regulation towards a harmonized model for information security, cybersecurity incident response, training, education and whatnot. But what I find the most interesting goal and objective in this is the one that I've outlined here, and especially the other one, is it is the explicit goal of this regulation to ensure transparency of cybersecurity assurance. And assurance means being able to validate and evaluate if a security measure or control has actually been implemented in any way that ensures its effectiveness. Because, well, assurance is without control. That's basically what the religious domain is all about. And that's, for instance, in the cybersecurity domain, stay out of the religious domain. Any other thing? And this I mentioned briefly before is we are going from a model where laws dictate what is forbidden to a model where laws create a compliance realm or compliance domain in which, as a company or individual, you have done to perform proper risk management and impact assessments to prove your compliance with that specific legal domain. And I would say that this is an enormous plus, because before we had compliance with IT security laws was being placed under the legal departments of companies. And, well, I mean, a lot of my friends are lawyers. Yes, they are. But their primary competence area doesn't necessarily lay in the IT realm. Their primary competence area is primarily in the business of explaining, no matter what is the reality, that they are somehow compliant to whatever alternative reality is being described by the law. So what happens now, and that is the same thing with the GDPR, for instance, is GDPR is not an IT law. GDPR is also not an legal or a compliance department law. GDPR requires you to implement information-revenant information models. 
It requires you to come up with personal information registration administration with interface descriptions and whatnot. It actually requires you as a business to implement proper information architecture. So how does information architecture governance relate to legal compliance, or at least to the lawyer side of legal compliance? Well, it doesn't, because it just doesn't. So there are areas of business in which we've been doing this for quite some time, especially in the health care realm. And when you look at medicine manufacturing, for instance, in the early 20th century, there were a lot of medicines against flu and against pneumonia. And essentially, what these medicines did was not exactly cure the disease, but they made you die very happy, because they were primarily made out of morphine and alcohol, which is awesome, which makes you really warm and fluffy, but still die. So what happened in the early 20th century in the health care realm was that they implemented laws requiring the so-called subsidiarity principle, which is a legal construct, which says whatever you claim to be effective towards something must actually be proven to be effective towards that something. So the medical industry went like shit. Now, we actually have to come up with something that works. And that took them about a hundred years. And then they started to implement medical devices to help out like pacemakers and insulin pumps and artificial lungs and stuff. And with those instruments, it also helps if they actually do what they're supposed to do and not necessarily create cardiac arrest and whatnot. But now we have this new reality with the networked environment in health care, which also requires pacemakers to be somewhat protected from, let's say, hacking, because what's more easy for an assassin? Do you think it's more easy to go into a brush or something from a kilometer's distance and have a big giant sniper rifle and then hoping that with the wind and the angles and everything, you're able to hit your target? Or does it look more easy to simply walk towards his house, have his Wi-Fi overload his insulin pump and kill him like that? Personally, I would prefer the latter. But this implements, or at least this requires, the implementation of risk management, which means that you first have to know what the risks are of the project that you are trying to defend, then come up with requirements to mitigate those risks, then implement those requirements, and then actually test if those implemented requirements are indeed effective towards what you're trying to protect. In the IT realm, this is something like the V-Model. There's different models, but this one is a very common part. So, like I said, data, general data protection regulation uses the same principle. Know what you have on the information side. Know whoever is using or processing the information. And if you know those things, it's very easy, at least it's more or less doable to provide the data owner with the right access and portability or to the amount of ratio. And the nice thing about the GDPR as well, and this is, I mean, there's all, if you hear this, then the whole power to the people, premonition of the GDPR, that might suggest that the GDPR is an entire company law. 
But it isn't, because companies now, for the first time in history, have one single go-to point regarding personal information and information security-related matters in the entire EU, instead of the 27 other pretty much disagreeing little parties, which implemented their own version of this reality. Thank God you were out, because now it's one less, but still the model is not very scalable. So, what we're now coming up with in the EU cybersecurity realm is an EU-wide competence network. And the competence network's first task is creating a common vocabulary. Because if you don't agree that you actually speak the same language, then it's very hard to even agree about anything else. And the same language doesn't necessarily mean the same linguistic language, like the spoken language, English or whatnot, but coming up with definitions regarding scope, targets, measures, applicability, and whatnot. And this is something that's actually interesting, because the individual computer emergency response agencies within the member states used to all have a different definition of what an IT incident is. An IT incident in the Netherlands, for instance: if you are aware that the Netherlands is basically this swampy area surrounded by water, which mainly lies around three meters below sea level (actually, my house in The Hague is four and a half meters below sea level), then in the Netherlands a major IT incident would be not being able to reach the water level management facilities in my area. Yet that type of IT-related incident might not be the same in, let's say, Austria in the Alps, for reasons of not being underwater. So first, we have to come up with a common vocabulary. And then we have to upgrade. And I don't say update, because we are taking a leap here. We need to upgrade security in education. The example given here, a PHP bit of crap, literally comes from an IT course from a university of applied sciences in the Netherlands. This is not unique to the Netherlands. And any of you who read PHP will know instantly what's wrong with it, other than that I forgot a couple of quotes here and there, but those are my typos. The general principle is, let's say, input validation. So in the past weeks, and this is really nice, the national news agencies in the Netherlands have had over 15 major news items about IT courses doing crap like this, which, being an IT security teacher for several universities in the Netherlands, makes my job security a lot better for now. But still, we have to improve on this. So the focus for the next two years, which is coincidentally also the timeline for the implementation of the new EU Cybersecurity Act, will be to create at least some sense of security in education as well. So we have building block number one, create a common vocabulary, and building block number two, make sure that we educate people to actually know what they're talking about; even better, first we educate the teachers so that they are able to educate people to actually know what they are talking about. And then we go into certification. And certification ties into the whole transparency aspect. So how can you give people the tools to actually assess whether the product that they are using is in some way fit for purpose, including the security question? So you come up with certification, naturally, but certification has a pretty bad reputation, especially if you look at commercial IT certification realms like "certified secure by Norton antivirus". 
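To make that input-validation point concrete, here is an illustrative shell sketch. It is not the course's actual snippet (which appears to have been PHP); the database file, table name and prompt are invented, and the point is only the class of mistake being described:

    # Raw user input is spliced straight into the SQL string. A "name" such as
    #   x' OR '1'='1
    # turns the WHERE clause into a tautology and dumps every row in the table.
    read -r -p "Name: " NAME
    sqlite3 app.db "SELECT * FROM users WHERE name = '${NAME}';"   # vulnerable
    # The remedy is to validate or escape the input, or better, bind it as a
    # query parameter in the application layer instead of concatenating SQL.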
This email is certified secure by Kaspersky, yet the Russian secret service is listening over your shoulder, but we don't tell anyone, still. So you first have to come up with what your process of certification, or evaluation towards certification, would then be. And luckily, the beauty about standards and norms is that you have so many to choose from. We have the ISO 15408, which is evaluation criteria for IT security, which happens to be the norm that basically the western world adopted around 2000 already as the Common Criteria; or actually the 15408 is the norm which allows you to evaluate if some party has implemented those common criteria correctly. And it is actually as useful for developers as it is for customers. I'm sorry, I have a terrible cough, so I keep switching my microphone off and on to try and prevent you from being blasted away by the coughing. So if I at some point forget, please forgive me. So when you introduce a concept in law and then you say it's voluntary, that usually causes some kind of friction, because voluntary laws, that kind of is a contradiction in terms. So what they've built in in this particular realm is a comply-or-explain principle, which means that you either adopt this norm or you have to explain, for every little teeny tiny detail which is in the bloody norm (and those are a lot of pages), why you don't comply, thereby making it effectively easier to comply with the norm than to explain why you don't. So I'm going to go over the items of the ISO 15408, but I should indicate also that this is a norm that is free to download, free as in beer, although I wouldn't recommend reading the norm whilst drinking beer, because it is quite a complex norm and you have this Ballmer peak, the optimum of intoxication versus information processing, and that tends to go south quite easily. If you look at Windows Vista, we all know what that means. So I will go over the norm, but you are able to download it yourself from the ISO website, and it is, if you're used to reading technical requirements specification documentation, a surprisingly readable and practical norm. It's big. It is a couple of hundred A4-sized pages, but that is not because it's so complex; it's just because it covers an immense amount of topics. So I will scan the surface here so that you at least have some clue as to the scope and coverage of this norm. But I would highly recommend you to download at least part one, which is the general model and introduction. That gives you an impression of the scope and an idea of the area in which this norm is applicable. Because again, this will be a voluntary law soon. Yeah, so if you want to be able to market any kind of IT-related system or IoT-related system in any kind of regulated environment, that whole concept of voluntary may not be entirely applicable. So we have this concept called target of evaluation. That's the beginning. And the target of evaluation is not necessarily a single system. It is what we in the architecture domain refer to as an information domain. And an information domain is one or more systems or applications, or parts of systems or applications, that work with certain related information entities and are being used in a certain type of functionality. For instance, HR departments: if you would consider the HR process to be the target of evaluation, that would consist of both the HR registration with the names and addresses and dates of birth and whatnot. 
It would also include the financial process and information, because for some reason employees want to be paid. So at least if they don't work for free or voluntarily and you have a financial administration, you should consider that to be under the information domain HR. So I hope that's clear: information domain does not mean single application. Sometimes it does, sometimes it doesn't. But this makes the concept a bit hard from a vendor perspective. I would even say it's almost impossible from a vendor perspective to claim that your single application is always compliant with the Common Criteria or ISO 15408. Because from my point of view, and I will give an example later on, the only type of application or system that's actually suitable to be an information domain by itself is an operating system. Because that is a low-level, mostly single-purpose type of thing whose job is to serve other things. And we'll go into that later. But for instance, SUSE Linux Enterprise Server has been certified by the Bundesamt für Sicherheit in der Informationstechnik, which is the German NSA. SUSE Linux Enterprise Server has been certified for ISO 15408 purposes as being a target of evaluation by itself. So this is a concept you should just internalize, and keep an open mind about what type of thing might be considered to be a target of evaluation. A target of evaluation can also be a process. So, targets of evaluation are also combinations of different assets. And assets can be systems, documentation, even personnel. And especially IT systems, as they are configurable, might in some configurations not be suitable for usage in such a highly regulated environment. If you don't implement proper security controls like SELinux or AppArmor, to name just a couple of simple things, then it might be that that particular configuration is not applicable to be a target of evaluation. But still, just accept this as being reality. So the target audiences for the 15408 are: consumers, because consumers can define requirements, what they would need from a security perspective in a target of evaluation or a system, and that's called a protection profile. So a protection profile could be: I have a firewall, and that should prevent stuff from going in and out of my network without me explicitly telling it to do so. Developers, on the other hand, develop the security target, which is supposed to be protected by the protection profile. So: the security target is the system, the protection profile is the requirements of anything you want to impose on that system. And then you have the evaluators, who get to poke around to see if the protection profile is actually sufficient for the target. And then you have the rest of the world. So part one, like I said, is actually quite readable. It goes over how to come up with security requirements, so the process of requirements elicitation, specification and validation, and how to create protection profiles and packages of profiles, because you can reuse profiles across different targets of evaluation which have the same profile. I mean, if you create a profile for firewall A, then you can probably reuse that profile for firewall B, and that's entirely legitimate. Chapter nine, or clause nine in ISO terminology, is on how to handle evaluation results. And then we have a couple of annexes which go into detail on what it is. 
So chapter seven, how to come up with requirements, how to define them, how to create a terminology, and use the terms appropriately for the requirements. Chapter eight, come up with profiles and packages. And here again, the example is a firewall. For instance, I have an IP table firewall, and that should do whatever every other firewall should do as well. So then I have a protection profile for a firewall, and the security target, in this instance, will be the IP tables firewall. And then we have the evaluated results. And the nice thing about that is the purpose of the common criteria is not only to evaluate, but also to communicate those results. So for instance, the SUSE validation, SUSE Linux Enterprise Server validation, that is the result of that. The report is online in its entirety. So you can access the report and see how SUSE prepared itself for this certification process. You can also see which parts they included in it and which parts they didn't. So the scope of part one is mainly background information. Part two is towards what kind of requirements do we have, and part three is what kind of assurances can we implement so that the requirements are met. So then you look at assets and environment. And anyone who's familiar with IT security, especially ISO 27001, will have heard about the concept of assets and environment, because an asset is something you want to protect. The environment is somewhere you want to protect the asset in, and then you create a risk profile of all the risks that the asset might be exposed to within the environment. An asset can be everything, even if I can be an asset if I want to be. As you see for my own company, I am an asset. The rest of the world might think differently, by the way, that's their freedom. So if you go into concept and relationship, an owner wants to protect assets, will take countermeasures to mitigate risks, a threat agent will try to impose those risks on the same asset, and that's how it all ties together. So the whole idea of the ISO 1548 is to demonstrate applicability, appropriateness, subsidiarity, if you will, effectiveness, to define the fitness, to demonstrate the fitness for purpose of the measures that you take on security. So because if something is not interesting for you to consider to be a risk, they don't have to do anything against it. So the whole process of risk assessment is not just coming up with as many hypothetical scenarios as possible in which something can go wrong. Now it must be in some way related to any reality that you feel yourself into. So evaluation gives you confidence that the countermeasures that you take are sufficient, are correct to mitigate the risk that is exposing your asset. So then we have part two, which goes into the functional components and the requirements. Again, I've just outlined the basic steps or the basic concepts here. Coming up with requirements towards a security target, creating a protection profile, and it will create what's called security functional requirements. If you are an IT architecture student or an IT student in general developer, then you will note that security requirements are mostly regarded as being non-functional requirements. For instance, the IEEE norm of software requirements specification explicitly names security requirements to be non-functional requirements. So what does the ISO 15 for right mean here? 
They mean that security functional requirements are basically the security, access, authorization and authentication measures built into a system or into a target of evaluation. So not necessarily the security requirements on the physical environment or the operational environment of, let's say, a system administrator: I have to keep my copies of the installation media absolutely secret because else someone will steal them. It means that this is whatever you can configure inside a system as being a security measure. That usually means authorization. So then you can come up with a policy. The policy is the part where you link the requirements to the actual procedure in which the system is used. Then you can bind them into the target of evaluation security functionality, which is the matrix in which they are tied together. So the objects of relevance here are usually users. Users want to access information. That information is being categorized in objects. A user has a session in which they will interact with the system, and to facilitate that session there will be resources available, like a server and network or whatnot. And then within those concepts you will have certain attributes. For instance, this user has this profile, this security clearance, so therefore they may have this role in the system, et cetera. This basically sounds like a simple explanation of role-based access control, but it's even more than that. This model defines a distinction between security functionality data, which consists of the user attributes and the access control and information attributes, and user data, which basically is the data of the system itself, so the workflow processing data. So security-related data is metadata, and the rest is system data. So part two has the following chapters: which requirements should be imposed on security audits? What can we audit on communications? Does cryptography play a role in there? How do we protect the actual data of the system? How do we go about identification and authentication, multi-factor authentication for instance? What is our security management scheme? How do we go about privacy? Et cetera, et cetera. And then we have part three. First we create the requirements, the security requirements, and part three of the norm goes into how we can then come up with assurances that provide the required level of certainty that we need. And, well, you have to think in terms of vulnerability management, because the thing that you are trying to prevent is vulnerabilities from being exploited, and vulnerabilities can come from a number of sources, i.e. the requirements of a system have been so poorly described that the business process will not be able to be facilitated (that is a vulnerability), development is performed poorly, or the operations of a system are not implemented correctly, thereby leaving open all kinds of security holes. Vulnerability management you can do on a certain level. You can either eliminate the vulnerability, minimize the impact of the vulnerability, or monitor the vulnerability occurring so that you can then take action. Oh, by the way, I'm going through this quite fast, but the slides will be online if anyone is interested. And then how do you go about approaching evaluation? Well, you can simply analyze a process and procedure. You can do penetration testing. You can implement planning and control cycles to come up with this. And then you determine what kind of assurance level you need to adhere to for a certain system. 
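As a loose, everyday analogue of that user, role, object and attribute model (not ISO 15408 itself, just an OS-level illustration with invented names), plain Unix groups and ACLs already express the same idea:

    # Roles as groups, users carrying a role attribute, objects as files whose
    # attributes (mode, ACL) decide which roles may touch them.
    groupadd clinicians
    groupadd auditors
    usermod -aG clinicians alice            # user attribute: alice holds the "clinicians" role
    install -d -m 0770 -o root -g clinicians /srv/records   # object only accessible to that role
    setfacl -m g:auditors:rX /srv/records   # a finer-grained attribute granted via an ACL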
For instance if you implement a healthcare related system from start to finish you need to know what's going on there. So you will need to come up with assurance requirements describing in detail all the different parts of the system. And you have to choose a level that is appropriate for the target of evaluation that you want to protect. And in the last part for instance let's say functioning tested means I have my senior user group walking through all the procedures as soon as I have a new release. Structurally tested means I add automated testing methodically tested and test. This is a formal process in which tests are being validated etc etc. In the norm there are several of these matrices going into this. I'm wrapping up due to time. This is the chapters in the clauses which are outlined in part three which give you input for that. And as open source community I think we should start preparing for this because this will be what will be required from users that want to use our systems in the nearby future. So we think about coming up with protection packages and protection profiles and maybe standardized community efforts to create such profiles because just as the GDPR free and open source software are not developed in a vacuum. They are developed in a virtualized environment but still they are. We need to be law abiding citizens. So communities we need to start because we can also benefit from this because being open source, being transparent means that we as part of our ecosystem can actually be completely and brutally honest about what we do and this is a bit plus for businesses that are actually be able to contribute back to us by means of those validation packages just like the end user now helps us with documentation and testing and whatnot. So if you have time look into the use case of SUSE. It's on the website of SUSE. They have a nice report of everything they've done to ascertain the certification and basically this is what we should do better as I just outlined and I want to end with a very happy picture because it is a very happy ending I would say and I'm even somewhat on time. So who's still awake? Sweet. I'm not sure if I have time for questions or if you're even still able to formulate some questions. Yes, who dares? I will take this as a sign that I have been absolutely clear on this topic and that you have been sufficiently informed. So thank you very much for your presence and awareness here and I look forward to having some beers with you because that is actually my next purposeful goal for today. All right, enjoy.
Or how to prevent the EU from becoming the world's largest botnet honeypot. Fibre to the home opens numerous interesting possibilities for both bona-fide and not so bona-fide use cases. Having your espresso machine or refrigerator being part of a multi-million device botnet which is attacking critical infrastructure might not necessarily be your first association when sipping your early morning caffeine fix. Not only might this notion be somewhat disruptive for your early morning zen-moment, you might also be held legally accountable for these actions, as it is actually your home network participating in an international attack wreaking havoc on, let's say, the healthcare information system of a close NATO ally. Nowadays there is zero quality control being enforced over internet connected devices in general. But the EU (and US) have decided this somewhat naive approach should come to an end. A new directive (NIS, Directive on the Security of Network and Information Systems) comes into effect. Especially for branches active in the development of internet connected devices with a direct application in the "quality of life improvement" domain, this will be something to look out for: medical devices, automotive, domotica. This new directive includes the ambition of implementing a certification scheme for IT systems and devices; this scheme will be based on the existing ISO 15408 standard: "ISO/IEC 15408-1:2009 establishes the general concepts and principles of IT security evaluation and specifies the general model of evaluation given by various parts of ISO/IEC 15408 which in its entirety is meant to be used as the basis for evaluation of security properties of IT products." What does this standard encompass? What do open-source and free software have to do with this? Let's have a closer look in this talk!
10.5446/54499 (DOI)
So, hello, good afternoon, and welcome to this session about transactional updates. Maybe let's start with a question into the audience before we begin. Who has actually read about transactional updates in one of those numerous channels by now? There have been several posts on the openSUSE blog, and there has been coverage of transactional updates on news sites. So, yeah, who has actually read about transactional updates already? Okay, so that's a good portion of the audience. Maybe we can just skip some slides later then. What's the actual reason? The reason is probably that transactional updates were actually introduced in the form of the Transactional Server. It's been there since Leap 15 now. We have had it for a while, but it's not the only place where you can get transactional updates. It originated from the Kubic project and the SUSE CaaS Platform. That's where it originally came from. Both of those systems, the Transactional Server and the Kubic system, share one thing in common, namely they have a read-only root file system. So that's where transactional-update is actually used. However, there's no reason to limit it to read-only root file systems. transactional-update is just a regular package, which is part of Tumbleweed and Leap. If you want, you can also install it on a read-write file system, and I'll have some notes about that later in the talk. So what am I going to talk about? First of all, a small introduction into transactional updates: what's the architecture and the idea behind it? Maybe a short live demo on how to actually use it. Then, is there a large echo here? Just... okay. Then, compared to what I had in previous talks about transactional updates, what has changed since then? And the talk is called transactional update deep dive, so we'll have a deeper look at some of the mechanics behind it. Another question into the audience: who's actually packaging applications? Okay, quite some people. The question is what do we have to do to actually be compatible with transactional updates? We'll get into that later. Sorry, I'll have to skip looking at the alternatives from other distributions. We are not the only ones in that field; I just won't have any time to do that anymore. But we'll have a short look at what will be done next. So let's start with what it is and how it actually works. You may have seen that slide yesterday in Richard's talk. Let's start with the definition of transactional updates. A transactional update is an update that is atomic. That means it has to be either fully applied or not applied at all. The second part is that it doesn't influence the currently running system. So if something breaks, you certainly don't want to break your running system; the update has to run somewhere in the background. There's another criterion, which is that the update has to be able to be rolled back. So if something fails, which you only notice later, after the update, you have to be able to get back to a snapshot where you know that you had a previous known good working state. So let's have a look at the current system. Currently, we are using snapper together with zypper. And if you have a look at it, we can see that we get two snapshots: one snapshot before actually applying the update, and a second snapshot after the update was actually applied. The problem with this approach is that all the updates are done in the currently running system. 
So if something breaks, for example if an RPM package cannot be applied, has some error in its post script or whatever, you have just actually broken your currently running system. What you can do now is go back to a previously known good snapshot, but of course we are then violating our rule to have that atomic part. So what is transactional-update actually doing differently here? transactional-update is also using snapper and zypper in the background. So we are just building on existing technology and putting another layer on top of it. We also get two snapshots, one pre-snapshot and one post-snapshot. The pre-snapshot may be a bit strange if you are using it on a read-only file system; I'll get to that later. And the post-snapshot is not really a backup snapshot as it was in the snapper case, but it will actually be the working snapshot. Let's have a look at the graphical representation of that. We'll have the currently active system. Can I just paint in something here? No, I can't. We have the currently active system, which will get the backup snapshot, and then we'll get a second snapshot where the actual update is applied into. During that whole time, the active system will just continue to see its own current root file system. It won't see anything of that snapshot over there where the actual update is done. So how do you get the system to actually see the new contents when the update was successful? Then the default Btrfs subvolume will be set to that new snapshot. You'll have to reboot, that's the atomic part of the atomic update, and then you'll be in the new snapshot. If you want to go back, you can just go back to any of those previous two snapshots, and yeah, that's basically the whole magic behind it. Now, who has actually used transactional-update already? Well, quite some people, but still a lot of you didn't actually see how it's working. Let's have a look at a system which is actually running. Just let me switch the display here. Can you see something? Okay. So this is a current Kubic release, which is one of those read-only systems. I installed it a few days ago and those are the current updates. So what can I actually do? I can, for example, still type zypper lr to list the repositories, and we can see that it's just a conventional Tumbleweed system. Nothing special about it here. Sorry, it's gone. Okay, back again, yeah, I've adjusted that. We can see that it's just a conventional Tumbleweed system without any special fancy things. Just one note: you don't have to remember all those commands. I've put together a cheat sheet. You can see what I've been typing in here. But we can see that the root file system is a read-only file system. If you have a look at the other mounts, you can see a lot of read-write file systems. I'll get to that in a minute. So we had the repositories listed. We can even do a zypper refresh. And you may be wondering how that is even possible if it's a read-only file system. For the first one, the repositories are stored in /etc. /etc has special handling, like the /var directory, I'll come back to that later. Both of them are read-write directories. So if I actually try to install something now, any suggestions? No, sorry, I still have to be able to use the system. Let's take traceroute. Then we can see that this will actually fail, because we are on a read-only file system. So that's where transactional-update, the command, will come in. 
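In shell form, the gist of what the live demo shows up to this point is roughly the following sketch (commands as mentioned in the talk, exact output omitted):

    findmnt /                       # root is a read-only Btrfs subvolume (a snapshot)
    zypper lr                       # listing repositories works: /etc is writable
    zypper refresh                  # refreshing metadata works too (cache lives under /var)
    zypper install traceroute       # fails: the root file system is read-only
    transactional-update pkg install traceroute   # works: installs into a new snapshot instead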
transactional-update is just a regular command, like you would expect, with several operations. One of those operations is, for example, if you want to install or update your system. Let's hope nothing has changed. Oh, we are getting a new kernel. Okay, yeah, so now we have to wait. But now we can see what transactional-update is actually doing. We just got those two new snapshots. And the kernel update will be installed into that new snapshot. We'll also get, of course, a new initrd, which the kernel will be generating, and maybe even new GRUB settings. All of those are handled in that snapshot. How long is it going to take? Yeah, sure. Mm-hmm. Yeah. Just a second. Yeah. The question was: if there are several packages in a package update, or in a transactional update, will I get a new snapshot for each package that was actually updated? No. That's one snapshot per transactional-update call. Or two snapshots, we get that pre and post snapshot. But all of those updates will be applied in that one post snapshot. I have one question based on that. You just ran transactional update, and it didn't ask you if you want to apply it, it automatically said yes to the question, if I saw that correctly on the terminal. My first question is: so transactional update is not interactive? Or, I think at some point I also tried transactional update and it asked me yes or no. And when I pressed Control-C, because I wanted to cancel my action, I think it removed the snapshot, but I was not sure if it removed the future snapshot or even my current one, which was fine. Whenever you type transactional-update, whether it's interactive or not depends on the command you type. transactional-update was developed as part of the Kubic project, where automatic updates were the main reason for it to exist. So if you type up or dup, those will be non-interactive commands. I'll do the second part now: if I want to install traceroute again, I'll type transactional-update pkg in and the package name. And in this case, I'll have to confirm that I actually want to install it. So now we got traceroute installed. And now the question is how do we actually use it? You'll notice that line towards the end: please reboot your machine to activate the changes and avoid data loss. I just said so before. Yeah, that's the question. That's the next point. We got a warning at the beginning, just a second, I'll highlight it, which says: warning, default snapshot differs from current snapshot, any changes within the previous snapshot will be discarded. If you have a look at our current file system, you'll see that we are in snapshot number 10. If you actually have a look at snapper, you'll see that we are meanwhile at snapshot number 16. So what has happened? We got three snapshots based on snapshot number 10. All of these three snapshots are independent of each other, which means whenever I type transactional-update, I'll get a snapshot based on our current system state. So please take care: whenever you want to apply your changes, you have to reboot the system before doing anything else. Otherwise, you'll just discard the changes you just did. I mean, they're still available in that snapshot. You can boot that snapshot explicitly from the GRUB menu, or you can just say transactional-update rollback, for example, 14. And then I'd have set that specific snapshot as the one that I actually want to use. However, please take care. 
Each call will just create another snapshot based on the current system, ignoring the other things you've done in the meantime. So, the question still was: we just installed traceroute. If I just type it in here, as expected, it won't work. So I'd have to reboot into that new snapshot now. I just broke it, if you were paying attention, because I just reset the default snapshot to snapshot number 14, which didn't contain traceroute. So I'd just have to reset the system to snapshot number 16. Before I do that, I'll just create another snapshot, and I'll show you another command which might come in quite handy. If you want to see whether your command actually did what it was expected to do, you can just type transactional-update shell. What that will do is give you a read-write shell just before closing the snapshot, i.e. setting it back into read-only mode. And you can just see what you actually did. So let's just install traceroute again; I've combined some commands now, you can combine almost any number of commands if you want. And you can see we are now sitting in a transactional update shell here. This time, you'll be able to actually type the command, and it will work. If you leave the shell again, then it's not working anymore, because we aren't booted into that snapshot yet. Just a second. Yeah, I've already shown you that with snapper list you can actually see the current snapshots. With findmnt, you can see the actual snapshot where we currently are. If you want to see the snapshot we'll boot into next, you can type btrfs subvolume get-default on the root file system, so we can see the next snapshot we'll actually boot into is snapshot number 18. That's because our installation was successful, and the last successful transactional-update run will always set the snapshot for the next boot. So that's basically it. I won't reboot, that would take another minute, which we don't have. So just a second, let's switch back to the presentation. As I said, I've prepared a cheat sheet. All the commands I've just been using are listed on that sheet. And yeah, that's the basic operation. So basically all the writing operations on the actual system will be replaced by a corresponding transactional-update call. So, I think I mentioned that repeatedly: please consider that each transactional-update call will override your previous run if you didn't boot into that system. And be aware, if you're using transactional-update on a read-write system, you will have a high chance of overwriting actual changes that happened in the running system. Because on a read-write system, nothing will prevent you from just modifying anything in the root file system. And you may be surprised that you don't get the files or the things you'd expect in the next run. So, yeah, let's just ask: who of you has seen any of the previous talks last year, during either FOSDEM or the openSUSE Conference, about transactional updates? Oh, a small number, so I don't have to mention it. You may still be glad to see that /var is one single subvolume now. You probably can't read what's on that screenshot, and that has a good reason, because every line which contains some of those red things there is one dedicated subvolume for one /var directory; that used to be the case up until Tumbleweed somewhere at the beginning of this year. Now it's all combined into one /var directory. That has one specific reason: previously we had state in /var that really belongs to the system snapshot, most prominently the RPM database. 
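The cheat sheet itself is not reproduced in the recording, but based on the commands used in the demo it boils down to roughly this (the snapshot number and the traceroute package are just the demo's examples):

    transactional-update up                        # non-interactive update into a new snapshot
    transactional-update dup                       # non-interactive distribution upgrade
    transactional-update pkg install traceroute    # interactive package installation
    transactional-update shell                     # read-write shell inside the new snapshot
    transactional-update rollback 14               # make snapshot 14 the default again
    snapper list                                   # list the existing snapshots
    findmnt /                                      # which snapshot is mounted right now
    btrfs subvolume get-default /                  # which snapshot will be used on next boot
    reboot                                         # activate the last successful snapshot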
The RPM database, of course, has to be part of the snapshot, because you have to be able to roll back to that specific state; the applications and the RPM database are one unit in that case. So it was moved to /usr/lib/sysimage/rpm. You may be wondering about that strange path: that was agreed on in combination with Fedora, which, with Project Atomic, was having a similar problem. But now, as I said, that's the reason it works with a unified /var directory. I'll just skip the next slide. So, this is called transactional update deep dive, so let's have a deeper look at it. What's actually happening in the background? From a user's perspective, you just don't have to know that, but what's happening is that there are several special directories which have to be taken care of. The first one is /var, we just mentioned that: RPM packages are stored in there, the RPM database was stored in there, but there are several other things, for example databases will store their data in there, so that has several implications. First of all, we can't just roll back the /var directory, because obviously you don't want to lose your customer data, for example. Another point is, of course, that you have to actually have write permissions to that directory, because you actually want to store the information, and that means you'll usually have /var on a dedicated subvolume or a dedicated partition. Unfortunately for you, /var is not part of the snapshot, so if some package actually changes the contents of /var, that will be a problem, because it's not mounted in there, so this will usually fail. We have a special dedicated mechanism for it: if you did an update with transactional-update, then on the next boot after the update a script will just see what should have been done in /var and will generate those directories and files. But what won't work is if the RPM packages just modified something in their post scripts, for example just modified any contents there, because they will have modified just some bogus data, which has no direct relation to the data which is actually used in the running system. There's another special directory, which is /etc. Of course, also on a read-only file system, I assume you don't want to use the stock configuration files, but want to actually adapt or modify them; maybe you have a configuration framework like Salt, like we just heard about before. So that one has to be another writable directory. For that, we have another solution, which is that /etc is just an overlay. So whenever you modify any of the default files which were distributed by the distribution, you will just get another file in the overlay; the overlay itself is stored in /var again, so that's another reason why that's an important file system. And, just a second, what will now happen if you create a snapshot is that you will get the contents of the overlay file system synced onto the snapshot itself. That has one huge advantage, namely you can just roll back to any older snapshot and you still get the correct configuration for that specific snapshot. Think about fillup templates, which will just do merges if you install another major version of some application. You may happen to have different configuration file syntaxes, so you don't want to use the new configuration files with the old application and vice versa. So that basically solves that problem. It also has the advantage that the RPM packages can actually modify the real configuration files which are used in production. 
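To see that overlay on a running system, something like this is enough; the exact upper-directory location is an assumption about how current Kubic/MicroOS images lay things out, so treat the paths as illustrative:

    findmnt /etc               # reports an "overlay" filesystem rather than plain btrfs
    # lowerdir = the snapshot's pristine /etc, upperdir = the writable layer,
    # which lives below /var (for example /var/lib/overlay/<snapshot>/etc)
    grep ' /etc ' /etc/fstab   # the overlay mount and its layer directories are wired up here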
Unfortunately, that also means that we have to clean up the overlay file system somehow. So on the next boot of the system, if you boot into the new snapshot, a script will run that will basically just delete the overlay contents. With one exception, of course: the reboot doesn't necessarily have to be directly after your actual snapshot, so it may be that there are some changes to the /etc directory after you've taken your snapshot. So if it's detected that some files have been modified after the snapshot, then those files will still be left in the overlayfs. They will then probably be added to the real root file system with the next snapshot. There are special subvolumes, namely /opt, /usr/local and /boot/grub2. Those will be available in the snapshot, because they are there for regular third-party packages, for example, or for internal reasons. If your package is installing its data somewhere else, then you will probably get into trouble. For example, if your package is installing into /srv: that's not supposed to be used by packages, basically, and it's not mounted into the snapshot. So whatever your package does there, it won't be visible in the final system, because it will always be overmounted. Let's skip that. health-checker and rebootmgr are something which we'll see tomorrow in Paul's talk. So let me emphasize that again: transactional updates have the huge advantage that you can basically use any package we have in our openSUSE repository, with the only exception that you have to take some care. If you are following the packaging guidelines and the file hierarchy standard, then everything will usually just work out of the box. If you really need to do some handling in /var, then please have a look at how to handle such migrations on update: you'll usually want to use a systemd script which will do that modification that you want to do on next boot, namely when you actually boot the system that you have just updated. And yeah, in that case you can do the required changes. So let's just have a short summary of what transactional updates will actually do. It has one disadvantage, namely it's only compatible with Btrfs, because it heavily relies on its features and is heavily using its features. But especially when comparing it to other distributions, for example Fedora or Ubuntu or CoreOS, we have the huge advantage that we don't introduce any new techniques. We can just continue using our existing packages. And yeah, that's basically it. The only thing you have to remember is to use transactional-update, the command itself. transactional-update is a general purpose tool, also in comparison to the other distributions. You can use it for almost everything. It's especially useful, of course, if you're using it together with servers where you don't want to interrupt your services, or only want to interrupt your services in specific service intervals. And of course for clusters, where you want to make sure that every cluster node has the same state. So what's next? The last slide. Maybe you want to see that in SLES; that's a question we actually get quite a lot. There are plans to include it in the next service pack. It's not set yet, because we want to resolve all the /var packages until then. You may be glad to hear that we have a mechanism similar to dm-verity, with IMA and EVM; we are just evaluating that. And the last point, I'll just get into that one. 
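For packagers, the pattern hinted at above, deferring /var work to the next boot instead of doing it in an RPM %post script, can look roughly like the sketch below. The tmpfiles.d route is not named in the talk, so take it as one common option rather than the official recipe, and the package name is invented:

    # Declare the state directory so it is (re)created at boot, in the running system,
    # instead of being created by a %post script inside the update snapshot:
    cat > /usr/lib/tmpfiles.d/mypkg.conf <<'EOF'
    d /var/lib/mypkg 0750 mypkg mypkg -
    EOF
    systemd-tmpfiles --create mypkg.conf    # systemd runs the equivalent of this on every boot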
The question is: why do we have to type transactional-update at all? Because snapper also managed to get integrated into zypper. That's something we are looking into, to also just use it out of the box without even knowing you're using a transactional update system. So yeah, visit Paul Gornan's talk tomorrow if you want to see something else. I hope we have 30 seconds left for questions. Yeah, up there? How do you decide to go about rebooting the server? Usually you can just type transactional-update reboot, and depending on your configuration several things may happen. By default, it's using an application called rebootmgr, which you can configure with reboot intervals. You can just say: we have our downtime at three o'clock, and it will just reboot the system then, if it has to reboot at all; if the update just fails, then of course nothing will be rebooted. You can also configure it to just use systemd to reboot; then the system will be rebooted immediately. Or you can just type reboot manually by hand. Okay, Richard, how much of your time may I steal? Richard didn't complain. We can answer two more questions. Yeah, up there? Is there a way to convert an existing server into a transactional one? The question is: can we convert an existing non-transactional system into a transactional system? There is no automated way. You can do it by hand, basically, by setting the subvolume to read-only and, of course, modifying the fstab entry to set it to read-only. Please install transactional-update before, because otherwise you won't be able to update the system. That's more or less a manual way to do it, but it's not supported, let's put it that way. You said the problem is, if you do multiple changes before rebooting, only the last one wins. The question is: why do we only display a warning if a user is doing multiple transactional-update calls without rebooting? As said, it was originally developed to be used in a cluster setup where you don't do any manual steps at all. So by default, you'll have a daily systemd timer which will call transactional-update by itself. If something has been updated, it will inform rebootmgr, which will then just reboot the system. Basically, for testing I think it's quite handy to be able to just type in the commands, but yeah, maybe we could get a bigger warning. Yeah, that's something we should consider. Final point: if you want to contact us or if you have any questions, you can find us on kubic.opensuse.org. You'll also find our communication channels there. So that's it. Thanks a lot for listening. Thank you.
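The manual conversion described in that last answer is explicitly unsupported; as a rough sketch only, with device and mount details left as placeholders you would adapt to your own system:

    zypper install transactional-update   # while the root is still writable
    btrfs property set / ro true          # mark the root subvolume read-only
    # then add "ro" to the options of the root entry in /etc/fstab and reboot;
    # from that point on, use transactional-update for all system changes.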
How to update your systems without breaking them

With the release of Leap 15 the new system role called "Transactional Server" will be available during the installation, so this is the perfect opportunity to have a look at the concept behind it and how to work with such a system in practice. In this talk we will have a look at transactional-updates from different angles:

* The basic concepts behind transactional-update
* How to use the Transactional Server or Kubic (Users & Administrators)
* Packaging for transactional systems (Packagers)
* How transactional-update compares to other solutions from various distributions (Developers)
* Recent developments in the transactional-update world
10.5446/54508 (DOI)
Oh, hello. Okay. So, yes, I'm Richard Brown, the Linux distribution engineer working on Kubic at SUSE, and Alex is going to introduce himself. Hi, my name is Alexander Herzegam, the release manager for the SUSE Containers as a Service Platform. Yeah. And we're presenting... Well, yeah, we're presenting how we work together. So, openSUSE Kubic, what is openSUSE Kubic, and how that relates to SUSE's CaaSP product and, yeah, sort of how we're collaborating together from the SUSE and openSUSE side of things. But just before we start all of that, I wanted to kind of give a little bit of a picture-painting, history-lesson sort of thing. And why... What is this whole containerized world? Where are things going? What are things looking like? Why are we doing this? Why are we here? Lately, I've been getting more and more into retro computing and thinking actually about my first computer, which was a Commodore 64. And back then, a computer was completely disconnected from the world, sitting in your home, plugged into a TV, and you're just happily hacking away on this one disconnected device. But if you think of the world today, of the computing you're actually using in your hand, user computing, everything... You've got this massive plurality of different devices: smartwatches, phones, our computers on the desk, our laptops, our mobile stuff. And everything here is all interconnected in some way. Probably talking to some server somewhere in some data center, maybe more than one server, actually more than three servers, in fact so many servers that in the end we just stop talking about servers and start calling it a cloud, but it's still ultimately a whole bunch of servers. And this fact that computing, the general consumption point of computing now, is that you have some end user device and really a lot of the work is being done by some other thing in some back end somewhere, means the world is actually a very, very different place. And everything is way more interconnected, and that means everything is way more complicated. You see that not just reflected in the very tangible sort of network servers, racks and clouds and that kind of thing, but even down to our software. The software we're writing these days, more and more, has a million different modules talking to a million different things. And that just breeds complexity and confusion and difficulties with maintaining them and all this kind of stuff. So the general trend is towards turning everything into a module. Every piece of software is trying to get more and more modularized, more and more containerized, and delivered in the form of a smaller, easier, more manageable unit of consumption. So it's easier to figure out how to maintain it. It's easier for a developer to ship that software and have it reused in different ways or have it used interchangeably. So basically trying to turn computing into a collection of Lego bricks. And this isn't just to try and solve the complexity problem or the interconnection problem, but also a case of operating at scale. The world has got bigger. More people are using all of this stuff. More people want to use all of this stuff. They want to have services that can scale up and scale down depending on the amount of users you have, or the amount of users that might be using this today or might not be using it tomorrow. 
And when you then start thinking about our old computers, our Commodore 64s, our personal server sitting in a rack somewhere, we used to treat our servers like they were pets. We would give them a name. My servers were all named after Jedi Knights from Star Wars films. And we lovingly look after them and we patch them carefully and we micromanage all of the configuration. But that doesn't work when you're doing all of this stuff at that kind of scale. When you have so many servers that you don't have enough Jedi Knights left to name them and you can't SSH into each one and individually figure out how you're going to configure that ETC file on that machine, you don't want to treat your servers like pets anymore. You don't want to treat your machines like pets anymore. You want to look at it much more like cattle. Just number them, put a tag on them, use them. If they end up causing you problems, kill them and eat them. And just move on and have this constant farm of computers doing your work for you so you can move faster. So you can use this new software faster and faster and so you can deal with this world that we're now in. Not just from a community perspective, which I'll be talking about more lately, but everything I just talked about here applies equally true in the business world. Suze's business customers are trying to move in this world wanting to use technology faster and at higher scale, at higher pace of change. And therefore the case becomes how does Suze as a corporate company have a platform that kind of addresses these concerns? And this is why I hand over to Alex to talk about CASP. So thank you. There you go. Let me check. Does it work that way? Down. Down. One more. Down. Down. There we go. Here we go. Sorry. Okay. What I brought to this speech is I brought down Suze CASP platform and divided it into layers. I will go through them now one by one and just introduce you which layers we have here so you get a better understanding of what CASP platform is looking like, but this also applies for cubic with some other namings here and there, obviously. So the first thing we have here is we have an infrastructure layer. So even the cloud has some kind of infrastructure that needs to run on. So Suze CASP platform is capable of running on physical servers. So if you have some bare metal in your storage, you can get it out and install it plain there. You can also run it on your desktop machines if they are powerful enough. More also, what a decent idea is if you have some small factor PCs which you can easily or conveniently stack on your desktop and you can run a physical cluster on your desktop if you have some small machines. Breast per pie is not yet enabled, but this would also be another option once we start supporting the architecture. Then of course, we support a couple of hypervisors. So we have VMWare images out there. We have images for Hyper-V, also for KVM and Xen. I just realized the clouds are broken. Yeah, but yeah. Clouds broken? VMWare OpenStack AWS. Exactly. So we are running the classical hypervisors. We are running an OpenStack. So if you have an OpenStack instance, you can scale out. We do a lot of testing also on OpenStack because it's pretty convenient. Then there is, we're supporting public clouds like AWS and the future also Microsoft Azure and Google Cloud Engine. So you will find our images there soon. Yeah. Then there used to be an operating system which in our case is, this was a micro OS. It's on purpose or for purpose built. 
It's SUSE Linux Enterprise based; at the moment it's a SLES 12 SP3 based operating system. We call it MicroOS. Don't get confused by the name: the "micro" does not mean it aims at being small, although we do try to keep it small because you may be running hundreds of instances. The name comes from microservices — we're microservice oriented here, which is one of the main use cases for having CaaSP, so it can run all of those different modules in all of those different containers. Having a bundled operating system in your CaaSP stack means that you're able to install it wherever you want and you can configure it. We have transactional updates — we have a talk about that already from Ignas... tomorrow. Oh, it's tomorrow, sorry, I had the wrong slot. It goes into more detail about transactional updates, which is a really neat feature, especially for cluster computing, because you have zero downtime for doing updates. And transactional updates are also moving out into the rest of the SUSE world, so they are applicable on Leap and Tumbleweed; I'll talk about that more later. Then the operating system lets you debug: we have a toolchain module enabled now where you can debug a lot of stuff, and you're also able to install third-party tools like monitoring or whatever is needed for your purpose. On top of that there's some kind of container execution, a container engine. At the moment we are running the Docker engine as the first container engine, and in the next version there will be a tech preview that offers the possibility to run CRI-O as a container engine if you, for whatever reason, would like to or prefer it over Docker. With that you get access to the SUSE registry, which we will roll out pretty soon, so you will find signed containers in there. One of our other projects, the SUSE Cloud Application Platform, already uses the SUSE registry to ship their product, so you can download it only from the registry. Containers are usually said to be very small, but they have containers of about six gigabytes, so that is a pretty huge thing — not what people expect when it comes to containers, because they're considered small. And if you run a lot of containers, you want container orchestration, because you do not want to run or monitor your 50,000 containers on your own. So you need some kind of orchestration, and the quasi-standard, or the standard-to-be, or at least the most used one, is Kubernetes. Who of you has heard of Kubernetes before? Yeah, okay, almost everyone, so I'm telling you nothing new here, but let's go into a bit of detail. It consists of two parts. There is container scheduling, which takes care that your services are running almost all the time — nobody can give you 100%, but 99.9, and beyond that it becomes expensive. It provides you fault tolerance and high availability, so Kubernetes takes care that your container always runs if you define it that way, and makes sure that your service is available as you defined it. On the other side there is container management, which gives you control over your containers. You can define in which environment your containers are about to run, how many resources your containers should get, and where they may run. So if you have a hybrid cluster consisting of bare metal and virtualized environments, you can define where to run things.
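To make that concrete, here is a minimal sketch of those two halves — scheduling and management — using plain kubectl. The deployment name, image, node name and numbers are made-up examples, and this is generic Kubernetes usage rather than anything CaaSP-specific:

```
# scheduling: run a service and let Kubernetes keep three replicas alive
kubectl create deployment web --image=registry.example.com/web:1.0   # hypothetical image
kubectl scale deployment web --replicas=3

# management: constrain the resources each container may use
kubectl set resources deployment web --limits=cpu=500m,memory=256Mi

# management: pin the workload to labelled nodes, e.g. bare metal only
kubectl label node node01 tier=bare-metal                            # hypothetical node name
kubectl patch deployment web -p '{"spec":{"template":{"spec":{"nodeSelector":{"tier":"bare-metal"}}}}}'
```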
If you have some heavy-duty services, you can define that they should only run on bare metal, if you decide to set it up like that. And then there is another SUSE-specific thing: your cluster management, and there we have Velum, which we develop in-house. Velum is a UI where you can bootstrap your cluster and monitor your cluster. Here you can see the health state of your nodes, you can monitor, while bootstrapping, which nodes are good, and you can see the update status. But Velum doesn't care about the containers themselves — exactly, that's why I said it's cluster management, not container management. It takes care of the nodes you register for your system and you can control them here: you can add nodes, you can remove nodes, you can update nodes, and you can define the update policy you want to apply to your cluster. Then there's an optional layer: if you need it, you can install an application ecosystem like SUSE Cloud Application Platform, which abstracts your applications even more. If you're interested, go to our website and look for Cloud Application Platform to learn more about that. It's an abstraction layer for when you do not want to handle each and every container on your own; you can use SUSE Cloud Application Platform, which is based on the Cloud Foundry project. And last but not least, SUSE Cloud Application Platform is designed to run cloud-native applications, so there is a very specific way cloud applications should run — did I write it here? Yeah: they should be operational, observable, elastic, resilient and, of course, agile; that buzzword can't be missing. And with that, back to you, Richard. Thank you. Yeah. So, the brief story of Kubic — because it's a very, very young project, it only started last year, actually at oSC last year — is that openSUSE, both inside SUSE and outside, has taken a look at what SUSE is doing with this CaaSP model, looking at these layers and trying to use these technologies in an enterprise sense, and really looking at what the applications are in a broader community sense. So yeah, like I said, it started a year ago. It's a subproject in the openSUSE project — so yet another one under that big umbrella we have these days — and it's focused on all of these different container technologies: most of the ones mentioned in the stack there, but also broadening out a little bit as well. And we've now become the upstream for CaaSP and the CaaSP program. So really, we're kind of like the Martian explorers for this whole container side of things. It's obviously similar — we're coming from the same basic ideas — but already in a year things have grown quite different, and that's really quite an exciting thing. So we're independent from CaaSP; like everything in openSUSE, independent from SUSE. We're basing all of the work we're doing on openSUSE Tumbleweed — I'll go into more detail about that later — and more and more we're targeting the latest upstream container technologies. So, for example, there is Velum, and Velum is contributed to as part of the Kubic project, but in addition to that we're looking at kubeadm, which in the last year has kind of flooded onto the scene; that's really becoming the upstream cluster bootstrapper for most Kubernetes clusters.
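For comparison with the Velum-driven bootstrap, a bare kubeadm bootstrap looks roughly like this — generic upstream usage rather than the product workflow, with the pod CIDR, address, token and hash as placeholders:

```
# on the first control-plane node
sudo kubeadm init --pod-network-cidr=10.244.0.0/16   # example CIDR for a flannel-style network

# on every additional node, using the values printed by `kubeadm init`
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```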
And there's some really cool stuff that only vellum can do that CubeADM can't, and there's some really cool stuff that only CubeADM can do that vellum can't right now. So we're kind of looking at that and playing with that. A lot of the stuff with CRIO and Podman is happening inside the cubic space. We have transactional updates and all of the development we're doing there, and I'll talk more about that in a little bit. And yes, since starting Qwik, we've completely and utterly re-engineered the installation routine, so it's a million times more customizable than what the poor customers of CASP are using, so we can prototype new stuff and play around and generally have a lot more options in there. So in a nutshell, though, what are we really looking at is pretty much anything our community wants to look at in this container space. So for example, Paul has his talk on Sunday in the hall next door at that time, and that's an example of this. Yeah, exactly. There's Paul. And he's been experimenting with some stuff on Qwik. He'll be talking about that. In fact, I don't think you're using Kubernetes at all, so it's like a perfect example of the kind of crazier and more interesting stuff. Yeah, so please come to his talk. I'm going to focus a little bit on the transactional update stuff, although Alex already talked quite a fair bit about it. And with all of this highly orchestrated broad, having clusters of machines running all of your services, looking after all of this stuff, this old sysadmin maxim is more true than ever. If you've got a cluster of nodes, even if it's five nodes, but if it's like five nodes on two different clouds or whatever crazy arrangement you might have for running your clusters, you never want to touch that running system. It's just more work than it's worth half the time. But at the same time, you've still got to be secure and still got to be patched and still got to deal with those issues. So for that, we have transactional updates in cubic. It's an update that is totally atomic. It happens in a single operation. It totally applies, and the system totally changes to the new version of the operating system, or nothing at all happens, no software is changed, no libraries are changed, everything is just left exactly as it was. And as part of that, none of those changes happening while the system is actually running because your services are up, your things are running, you don't want to risk anything, you don't want to swap things around. And doing that properly in a single transaction and with the technologies we're using, we also wanted to be totally and easily to roll back. So you do make a change, it all happens in one easy swift move, and then when you test it or you then run it and you realize it doesn't quite work the way you wanted to, you can throw that update away and immediately get back to running the system exactly how it was before you changed anything. And yeah, so transactional updates were originally designed as part of microOS on the Casp side of things. It's really become like the core feature inside both cubic and Casp. It's definitely the moment I think it's the most exciting thing we're working on. And as you saw in the Leap 15 announcement today, this feature is also available as a transactional server mode in Leap 15 and Tombleweed. So you can have a Leap 15 machine using this as its update mechanism. Just pick the transactional server system when you're installing it. 
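A rough sketch of that atomic update-and-rollback cycle with transactional-update follows; the subcommand names come from the upstream tool and may differ slightly between versions:

```
# build a new snapshot with the updates applied; the running system is not touched
sudo transactional-update up          # or: dup, or: pkg update <package>

# the new snapshot only becomes the active system on the next boot
sudo reboot

# if the updated snapshot misbehaves, throw it away and boot the previous state again
sudo transactional-update rollback
```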
But I won't go into any more details because Ignas is over there, raise your hand, wave. Yeah, he's in this room tomorrow, going into more detail about that. So you can see how that all works and how to use it and have fun with it. So yeah, with this different view on things, with OpenSuser, sorry? No, oops, I thought you said something. Yeah, with this slightly different view on things and looking at slightly different things from what Casp is doing, how do we actually work together? Well, the story of cubic and Casp working together isn't really that different from the story of OpenSuser and Suser working together. Tombleweed is the star Suser factory. Whatever, if it's Suser or OpenSuser, it doesn't matter. We're all building on the same code base. And Tombleweed is there as our nice, stable, always working, always tested, always usable code base. It's the base system for all Slee versions and even changes for service packs are going to factory first, which is the process we call factory first, which is a key part of all Suser Linux enterprise development. All development now follows the factory first policy. Almost everything ends up complying with that policy. There's always some exceptions, especially in service packs, obviously. Tombleweed moves so far that not all changes make sense to some libraries need backporting and that kind of thing, but the intention is definitely still there. And the main benefit this brings both Suser and OpenSuser from the Suser side of things, it makes features a heck of a lot easier to get into Slee. It makes, in fact, the transactional update feature being one example of that. It makes it easier for Suser's partners and the community to contribute into the Slee code base. It makes that all more stable, which means it's all nicer, more stable, more usable for everybody, which in turn makes everything nicer, more stable, and more usable for everybody who's using OpenSuser Leap, because that's where it also ends up afterwards. To display this sort of diagrammatically, it looks something like this. Everything from, well, everything that Suser cares about for Slee comes from Tombleweed when they start a new Slee code base, and when they're working on a new Slee service pack, everything they possibly can take from Tombleweed comes from there. With Casp, it's pretty much the same idea, but we have this thing called QBEC. So in essence, QBEC is this subproject, we're focusing on this container stuff. From a code base perspective, though, every QBEC and Tombleweed are pretty much interchangeable. It's the same repositories, it's the same code base, it's the same project in OBS. It's a different installation media, and it's a different installation routine, because we're focusing just on this container side of things. It's really just a derivative distribution of Tombleweed. But all of the code is the same, and to change something into QBEC, you change something in Tombleweed. In the same kind of, mostly the same kind of sense, Suser Casp platform is a derivative of Slee service packs. So if Suser wants to change something inside Casp, they can change that in that Slee service pack, or if the software in question doesn't come from that Slee service pack, then they pull it from QBEC, so all the kind of container-y stuff that doesn't exist in general Slee is generally being pulled from QBEC the factory way. So to do that diagrammatically, you end up with something like this. 
So Tombleweed feeds into Slee, Slee feeds into Casp, and Tombleweed and QBEC all share the same code base, but the QBEC bits that are interesting for Casp end up in there. So to kind of put that really a little bit too simply, all opens to the development, starts in Tombleweed, or Slee development starts in Tombleweed, Slee is based on Tombleweed, QBEC is based on Tombleweed. Tombleweed is kind of the heart of all of this, and if you really want to work with Casp, you're working with Tombleweed and Slee is a derivative of both of those two things together. So if I've interested you enough that you want to start contributing to this and changing what we're doing and having here, seeing what we're doing, we could do with more people testing it. We have our ISOs, they're working quite nicely, but we're testing them only in OpenQA right now, there's more manual testing is always useful. So you can just go to the regular Tombleweed download page, there is a cubic option there, you can install it on bare metals, you can install it on VMs, the installation routine like I say is nice and changed and simplified, so in fact it walks you through the system roles in far more detail than any other OpenSuser distribution does. And when you find bugs, because I'm sure you will, you file them in Bugzilla in OpenSuser Tombleweed as part of the cubic component, if you think they're cubic specific. Generally speaking, a lot of the bugs are shared between Tombleweed and Cubic, so if you file a Tombleweed bug, we'll fix them. There too. Obviously, though, in this whole kind of agile, cloudy world, a lot of people don't want to handle messing around with installing installation media, so we are working on VM and cloud images. That is the URL for the OBS project where they are right now. The DevL project is there, in fact, there is a factory project for them as well. But the cubic team, when we started looking at this, suddenly realized Tombleweed doesn't have any VM images, and in fact the Tombleweed release process doesn't have any way of releasing VM images yet. So that's something we're actively working on with the Tombleweed team at the moment, figuring out, okay, we can build them, how do we test them, how do we do them as part of the Tombleweed release process, so when there's a new Tombleweed snapshot, we're not just publishing a new ISO and a new repo, but we also publish a whole bunch of VM images for that as well as part of the whole thing. We're using for that kind of effort, we're using the factory mailing list, we're mainly using the factory IRC channel, and we also have our own cubic IRC channel as well. If you're a packaging or interested in packaging anything, sort of, container-y, we have the... I really want to change the name, because obviously it comes from CASP because they started first, but yeah, the develop CASP controller node, develop project where most of the cubic specific stuff is being incubated. It's a standard develop project following the same rules and processes that we generally have in OBS for Tombleweed. We could really do with some help with packaging, especially the more interesting new fast moving upstream stuff, things like the latest cryo and podman and the whole project atomic build tooling has a lot of very interesting stuff, and we do have versions in there, but they're moving quickly, there's different ways of doing things, so please feel free to contribute. 
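For packagers, contributing to that devel project goes through osc like any other devel project. The project and package names below are only illustrative guesses, so check the actual devel project name first:

```
# branch a package from the devel project into your home project (names are examples)
osc branch devel:CaaSP:Head:ControllerNode podman
osc checkout home:<your-user>:branches:devel:CaaSP:Head:ControllerNode podman
cd home:<your-user>:branches:devel:CaaSP:Head:ControllerNode/podman

# edit the spec and sources, build locally, then send it back for review
osc build openSUSE_Tumbleweed x86_64    # repository name depends on the project's setup
osc commit -m "Update to the latest upstream release"
osc submitrequest
```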
On the vellum side of things, even though we're looking at QBADM, we still wanted to actually help with vellum, and I'm looking forward to the day where vellum is a key part of the cubic standard cluster. All that development is being done in Git. Vellum is mostly a Ruby application, but there's two components with vellum. You've got the front end and a lot of the logic happening in Ruby, but the execution of changes to your cluster, the orchestration of bootstrapping the cluster, it's all actually done using salt. The repo name is a little bit incongruous, it's not quite true, it's not another copy of salt, the binary that runs the salt stack. The salt repo in the cubic project actually has all of our salt states, all of our configuration for salt. That's where we have all of the salt scripts and the salt profiles where we're defining how to change something on a Kubernetes cluster, how to bootstrap the cluster, what to do, when, how, etc. That's something we could definitely do with a lot of help with, not just updating the pace of stuff in tumbleweed, but those modules are incredibly useful for people who might be interested in just using them bare for running their own Kubernetes cluster separate from using the more broad tools. There's a lot of very useful knowledge there for starting up a Kubernetes cluster, moving the initial containers around and getting the base level done. But right now, it's all very much focused on the CASP stuff and we want to make it more usable and more flexible for dealing with the faster stuff. Please feel free to go there. It's GitHub issues, pull requests, it's all open and very, very, very nice to use. Almost last, we have the cubic website itself. We have the summary of everything I've talked about here. But also, we're trying to turn it into a very active blog slash community for showcasing what is happening in the cubic world of things. That's where you can read all the latest stuff about the transactional update features in leap and tumbleweed, where you can read about the stuff we're doing with Podman. If you're doing anything interesting in openSUSA in the container kind of side of things, we're interested in taking that blog post and putting it on there. It's very easy to contribute to articles because it's just a bunch of markdown in the Git repo. So please send us a pull request. And yes, last but not least, anything else. It's an open project. We're keen to see what, you know, this is a very fast moving area of IT of the world. So if you have any ideas of what you'd like to see inside Cubic, please get in touch. We have our mailing list. We have the Cubic IRC channel. Please join us and start with that, I guess. Does anybody have any questions, comments? Yes, Sergio. Yeah, sorry. I think this one might be charged up enough, so we can post. It is about these cubic images because if cubic is the platform to run the containers, I don't understand what are these cubic images. Are there the end applications that will be run in the containers on cubic or are images of the cubic itself to run something? Yeah, so good question. So they would be the VM images for your cubic cluster, for your cubic platform, so the OS bit. It's fair to mention, actually, I just realized I've totally neglected to mention the stuff we're doing on the container side itself. So yeah, if you indulge me for a second. In addition to all of this, in the Cubic team, we have people now working on the base containers for open SUSE and SUSE distributions. 
So for example, the official tumbleweed container in the Docker hub is being done as part of the cubic project. And that's, if you now, and same with Leap, which I think has also been upgraded to Leap 15 today already. So you can do your Docker pull from the hub, from that registry, and get a proper built properly, tested properly, open SUSE style, open SUSE quality container image. And one thing we're working with, but we're not quite there yet. It's a big collaborative thing. The OBS team have a bunch of features going into the open SUSE build service. And there is now a website you can already go to, registry.openSUSE.org. And that is an official container registry for the open SUSE project, and it basically reflects every single container built in every single project, in every single part of the OBS. So anybody with a home project in the OBS, putting a QB file there and building their own container, you can get that container from registry.openSUSE.org. It's all signed, rotored, done properly, because the Docker hub doesn't do that right. Ultimately long-term, that almost certainly will be the official place to get all of the containers for running things like Vellum and Kubernetes on a cubic cluster. It's just a case of not really being there yet. So we're doing the ISO images and stuff done, VM images next, and that release process part of Tumbleweed, which we haven't figured out. The container images will be the step right after that. That will probably answer it at the same time, but just, yeah, priorities. Cool. Next question. Thanks, Alex. What architectures do you support with cubic? Only Intel or? At the moment, only X8664 Intel. I have absolutely no problem with talking about doing any other architecture. So in my home project in OBS, there's a little subproject called cubic underscore rpi, and it works, kind of. Most of the issues that are actually stopping it working aren't issues with the cubic base or with ARM in open SUSE. Most of the issues are things like Kubernetes, where a QBADM bootstrapping has a million different timeouts, and there's no way a Rosbypi can do anything that quickly. So, yeah, it's kind of a case of dealing with upstream to make it more armor-friendly. So yeah, definitely, that's in the kind of everything else category. This is one thing I would love to see people help push us along, because I kind of like the idea. Next question. Yeah, Panos. There you are. It's not a question, it's just a comment that I would like also to highlight what Richard said is that cubic is a new project, and basically we can save it based on the community needs. So just to give you an example that this is true, a couple of months ago, I was looking in the rootless containers and what we do there in open SUSE, and now we have the test in open QA. So we're really proud of that, because the cubic might be only distribution out there that makes sure that rootless containers and OCI, open containers, initiative style of standardized containers are working for us. So there will never be broken there in that case. So if you have any use cases, bring them forth, and they can be a reality. Thank you. Cool. If there's nothing else, then thank you very much.
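Pulling the openSUSE base containers mentioned in this talk looks roughly like this; the image names and registry paths are assumptions and may have changed since:

```
# official openSUSE base images from the Docker Hub
docker pull opensuse/tumbleweed
docker pull opensuse/leap

# the openSUSE registry mentioned above, which mirrors containers built in OBS
docker pull registry.opensuse.org/opensuse/tumbleweed
```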
The Kubic Project is an exciting new part of the openSUSE family. This talk will provide a brief introduction of the Project and how it focuses on container technologies such as the Docker & Podman runtimes, Kubernetes, Transactional (Atomic) Operating System updates, and much more. The session will then go into detail how Kubic provides the base for SUSE's Container as a Service Platform (CaaSP), explaining how Kubic serves a similar role to that product as Tumbleweed does to SUSE Linux Enterprise, and explaining the relationship between CaaSP versions, SLE versions, and Tumbleweed. Finally, this presentation will be an opportunity for those interested in Kubic to learn ways they can get involved with the project and contribute, regardless if their interest is containers, orchestration, testing, or atomic system updates.
10.5446/54509 (DOI)
Hello, welcome to my talk, Behind the Scenes of the OBS Team. So Anko already did the joke yesterday with the TVs, like a rock star. I'm just relieved that I don't need to carry the microphone, the beverage and the clicker, because yesterday at the lightning talks everyone was shouting at you as soon as you put your beverage down somewhere. So I'm just happy. So, behind the scenes of the OBS team — that is Leo the Lion, behind the scenes of the MGM video shoot. Every MGM movie that you see has the lion at the beginning, and that picture is from, I think, 90 years ago, showing how they shot it. So, to quickly bring everyone up to speed: OBS is our distribution build system, for openSUSE and for SUSE. Every package and every appliance or ISO image that you install is built by OBS. It's a way to build packages for different output formats like RPM or Debian, and you can build different images like ISO images, images for the cloud, or even containers. This is just a very simple overview of how OBS is constructed. You have the front end, which is basically the API and the web interface — what you as a user usually use to talk to OBS. Then you have a back end, which contains different services, like the source server and the repository server, and the workers which do the actual build jobs. The front end uses a database and a cache, but there are also subsystems for the front end; for instance, what we recently introduced is application health monitoring, or the RabbitMQ bus which we introduced last year. So those are some subsystems. I'm just mentioning that because later on it will be important for how we set up our development environment, and why we do it the way we do is because of this setup. So, the OBS team: we are basically responsible for development of the front end, the API and the web interface, and we are also responsible for deploying our reference installation, build.opensuse.org, which most of you probably use when you package. So yeah, we are responsible for build.opensuse.org and for its development. Our team: we are all hired by SUSE, so we are working full-time on the Build Service. We are a distributed team, so all our meetings happen online in GoToMeeting — that was last week at our stand-up meeting in the morning. I'm in the right corner, and David, David is also here. So yeah, we are a distributed team, all communication happens online, and we are spread over two different locations: we have some developers sitting in Nuremberg in our SUSE office, and we have some developers sitting in Gran Canaria. So basically we are only distributed over two locations. For the next few minutes I just want to talk a little bit about our workflow: how we organize our work, how we work together, and how we decide what we work on. Basically, we do not wake up in the morning and decide, okay, we implement this feature. It more or less comes from the users: they will create an issue on GitHub, or they will create a FATE request, saying they want a feature implemented or an issue fixed. We are organized in Scrum — Scrum is an agile methodology — and our product owner prioritizes all the issues and feature requests and then decides what we will do in our next sprint, and then we start development. After development, we have automated tests to make sure that we do not introduce regressions and that the code quality is right.
After that we do a review. So we do at least a peer review. Often even more than one additional team member needs to review it. And the last step is we ship it to build OpenSUSE.org. So that is for us very important that we try to deploy as often as possible. So to talk a little bit more about Scrum. So we use basically Scrum by the book. So we have two-week sprints. We have a planning meeting at the beginning of the sprint where we sit together with our product owner and talk about what we want to do the next two weeks. We have at the end of the sprint, we have a review meeting where we show what we did the last two weeks to our users and also to our product owner. And we have the usual artifacts like a product backlog and a sprint backlog. So talking more about the meetings, we try to make the review meeting or we do the review meeting public. So we usually send an email at the moment only to our internal SUSE mailing list and we invite people from other teams to join and see what we did and give us feedback. We also talked about recently that we want to try to make it in general public so that people who not work at SUSE and from the community can also join and our review meetings and give us feedback. So what also happened the last few years is that we hired a lot of new developers. So in the beginning there were only one, then we started to hire two new people. So at the moment we are six and next month we will, or the next two months, three new team members will join. And with a big project like OBS, it's really hard to bring them up to speed. So what we often do is mob programming. So that means the whole team sits together and works on the same problem. So for one afternoon or two hours or something, we sit together, talk about what we need to implement and then somebody opens the editor and everyone works together on the same problem. And that really helped to share knowledge, to bring people up to speed and it's also really fun. Then often during the sprint, so it's the main feature or one feature of Scram is that you have an absolute two weeks where you cannot get interrupted and you can focus on the work. But sometimes you have work which cannot wait for two weeks. So we introduced a role of demolition squad which rotates every week. So one person of the team is responsible for the demolition squad is taking care of, for instance, security issues coming up or dependency updates, deploying our reference installation, taking care of exceptions which happen on our build, open source installation. And yeah, at the beginning when we started, so we did two persons work together in the demolition squad. So one more experienced team member, one not so experienced and then after a while we started that only one person does it and we rotate it every week. So it turned out to be really useful, so especially for deployment and stuff like that. And speaking about deployment, we do continuous delivery which means every commit what we do is potentially can deploy to production. That doesn't mean that we deploy every commit to production. We try to deploy as often as possible. We try to deploy every day but it doesn't happen automatically. So somebody in the team needs to trigger deployment manually. So that is something that we even want to improve more in the future and automate it even more so that we, with every commit to master, we deploy our instance but that is not the case at the moment. 
And if you're curious when the last deployment was or which version we are running, you can go to build.opensuse.org/about — we just recently introduced that — and you see the last deployment and its date. I just checked it before the talk, so the last deployment was on Friday morning, and you also see the commit hash, so you can go to GitHub and check, maybe if you sent a pull request, whether it's already deployed, and compare which commit hash is on master and which is deployed. So that was mainly about the workflow. For the next few minutes I will talk about our tools, what tools we use and why we use them. As I already mentioned, the artifacts that we have in Scrum are basically a sprint backlog and a product backlog, and we organize them in Trello. Trello is heavily used in general; it's a Kanban board which allows you to have different lanes, and you can create cards on which we have the user stories written down. You see here, for instance, the "to be done" lane, the sprint backlog, which is what we work on in the next two weeks; then we have the "doing" lane — when somebody starts to work on something, we move it to doing — and when it's done, it goes to review and then it goes to done. So that was Trello; now containers. You saw in the beginning how OBS is structured: you have the front end and the back end, you have a database and a cache, and then some subsystems. So it was really hard to set it up for development. When you started, if you wanted to hack on something, it was really complicated to set up. We first started with a Vagrant box — Vagrant is just a command-line tool to manage virtual machines — and that worked really nicely and we had it, I think, for more than a year, and last year we started to use containers. So we switched to Docker, and to be honest, at the beginning I was a little bit concerned and I wasn't sure if this was the right move, but looking back now it was a really, really good decision, because it makes it so easy to plug in more stuff like the RabbitMQ bus: you just have another Docker Compose file, you apply it on top of the default Docker Compose file, and then you have RabbitMQ running. So it's super easy to plug in more stuff. Every part of the application that I mentioned in the beginning — the back end, the database and everything — is a single container and you just spin it up, and if you mess up your development environment, you just destroy it and start it again. So it's super easy and super convenient. And in the other talk we had this morning we already introduced the Docker registry that we have now: all the containers that we use for continuous integration are already built in OBS. The containers that we use in development are built on Docker Hub, but we will switch that in the next few weeks, so we will switch completely to OBS-built images. Speaking about the code: we use GitHub, so everything is organized in GitHub. We mostly use issues — so if you find an issue or if you have a feature request, then please go to GitHub and file an issue. And as I said, we do reviews: if you send a pull request, somebody from the team needs to review it. You also see we work quite a lot with labels; we have different labels so that you already see at first glance, okay, this pull request is relevant for me, or I need to review it.
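The container-based development setup described above boils down to something like the following sketch; the exact build steps and port may differ, so check the repository's contribution docs:

```
# grab the source (the OBS frontend lives in src/api of this repository)
git clone https://github.com/openSUSE/open-build-service.git
cd open-build-service

# spin up the frontend, backend, database and cache as separate containers
docker-compose build
docker-compose up

# the development frontend is then typically reachable at http://localhost:3000
```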
So labels are really heavily used. And we also use GitHub statuses on pull requests. You see here the different states of the different services which run on a pull request and report whether it is safe to merge it: for instance, Hakiri for security issues, which scans your Ruby application and its dependencies and reports if there are potential security issues. We have Codecov, which checks, if you introduce new code, whether it is covered by tests or whether you are decreasing the code coverage. And we use Travis for continuous integration. Travis is a hosted service for continuous integration, and what we use there is a beta feature called build stages, which is really cool. You have different stages in your continuous integration process. The first stage for us is linting, which means, for instance, RuboCop runs and makes sure that all the code styles are applied. We do that because you only have a certain number of build jobs at the same time, so if the first build stage fails, the other build stages do not start. That means we do not pollute our build queue. And then we have four more build stages, or four more continuous integration runs: for every test suite we have, one run gets run. What Travis basically does is start an Ubuntu box for every build stage. And we had some issues with that, because we needed to package especially for Ubuntu to make the Travis run work, and Ubuntu is not a supported system for us, so it did not make sense to test on Ubuntu when it is not actually supported. So what we did is we switched to containers there as well: on the Ubuntu box we now run our SLE containers, which we build in OBS. So in Travis, on an Ubuntu box, we run Docker containers which run our tests. We did that just a few weeks ago, and it turned out that the tests take longer. Can somebody imagine why they take longer? Any ideas? The issue was that we did not run all the tests on Ubuntu, because the test suite was checking the base system — is it SLE, is it openSUSE — and if not, it skipped the test. So when we looked at the problem, we figured out, okay, it takes five minutes more, and it cannot be just because it runs in a container — it was because we used to skip some tests. Then, the front end is a Rails application, so we rely heavily on Ruby gems, and we use depfu, which is a hosted service that automatically sends a pull request as soon as one of your dependencies updates. It frequently checks if there is an update of a dependency and sends a pull request, and in the pull request you see that only one gem gets updated, and you even get the release notes so you see what changed. That is really nice, because before, we only updated the dependencies once in a while, and then you had 20 changes, and if something broke you didn't know what was going on. Then we automated it a little bit more, because we need to repackage all the gems in OBS to make them installable on openSUSE and on the appliance. So we automated a check for whether the package is already in OBS, and if it is in OBS, which version it is, and if not, which command you need to execute to update the version. For instance, the first one is already up to date in the devel project, and you need to execute a setlinkrev to update the link. Then, as already mentioned, we make use of comments and peer review. And what we also recently introduced is review apps.
So review apps are disposable apps which run the code of your pull request. When you create a pull request, there is a label you can attach, "review app", and it will automatically spin up an app with the code of the pull request. That is really nice for UI changes, because usually we work a lot with screenshots — you attach screenshots, and developers are usually lazy, so you don't check out the code, you look at the screenshot and say, okay, it's fine, instead of looking at the code. But sometimes you really want to check out the workflow or check what changed, and for that, review apps are really awesome. For that we implemented Traefik, which is a reverse proxy: we just check out the pull request, build the Docker containers, spin them up, and then the Traefik reverse proxy makes sure that it gets linked to a different URL. After a pull request is merged, we build an appliance — like a VirtualBox image or an ISO image — and the last step, or almost the last step, is that we test it in openQA. We do a smoke test: we spin up the appliance, log in, start the server, build a package on the server and check that this works. So we do a full integration test and actually build a package and check that it works. And as we also take care of the build.opensuse.org instance, we use Errbit to make sure that if we deploy something and exceptions are happening, we notice which exceptions are happening, and we can create issues right away from Errbit and make sure that we can deploy a fix. So that was it about tooling; now let's talk a little bit about community. We have a blog at openbuildservice.org and we frequently publish blog articles, and one thing we also do is write post-mortems. When we deploy, sometimes something goes wrong, so we all sit together as a team and really try to nail down what went wrong. We ask why it happened — and then a few more times why that happened — what we did wrong, and how we can make sure it doesn't happen again in the future. That has been really beneficial for us to improve our workflow and our tooling, and I think it's also nice for the community and the users, so that they know what is going on, what we did wrong, and maybe they can even avoid making the same mistake we did. And every two weeks, when the sprint is done, we release a sprint summary. We write down what we did in the last two weeks — it's actually part of the acceptance criteria of a sprint card that you add something to our sprint report. The whole team works on it: we have a document, everyone adds what they did to the document, and at the end of the two weeks somebody takes care of releasing it, summing it up and making it nice. Apart from that, we have the opensuse-buildservice IRC channel, we have OBS Headquarters on Twitter, where we usually also tweet when we have released a blog article, and we have the opensuse-buildservice mailing list. That is basically what I wanted to tell you and show you, and I'm happy to answer a few questions. Yeah, yes? So, first of all I would like to ask who the product owner is, because you mentioned him. The product owner — it's a shared role, it's Art of the Unschlutte and Michael Schlutte, so they share the product owner role together. They are both working at SUSE. And they prioritise the issues? Exactly. And the second question is, how do you plan to automate the deployment? Sorry?
How do you plan to automate it and to make it because you said it was manually? Yeah, it's manually. It's done by RPM packages and the reason is that's also how we deliver it to our customers. So you can install OBS or to our customers' users. So you can install OBS on your server if you want to and we deliver it as an RPM. And that is just drink your own champagne or eat your own dog food approach that we also install RPM packages. How we want to automate it more that is to be discussed. So that is not decided yet. What? Yeah. But we don't want to use containers at the moment. Because of several reasons, for instance, we only use containers in development and CI now for the front end. The back end is completely different. So what I was talking about deployment was only about the web interface and the API. And we do not want to deploy it different than the rest of the applications. So because the back end is also completely or is deployed by RPMs, if we switch only the front end, then the front end would be deployed different and the back end different. That is one thing that we want to avoid. Yeah, in general, switching to containers for production is not probably nothing what we will do the next year. So it was actually we thought about it and we talked about it and researched in this area. So yeah, it's nothing what we plan at the moment. Following up on that question, you showed this blog post with the post mortem of some issues after deploying to buildopensusu.org. I thought you also had a test instance. So do you not always deploy to the test instance first, see whether that works and then deploy to build OpenSuspa directly from Git to production? Yeah, so we have a test instance, but we do not use it for everything. So no. It's also sometimes deployment goes wrong. You deploy code, but you just realize it if you really use it. So for instance, we had one issue that the APM Linux didn't work again, didn't work because we removed the route or we changed the route. And it was not covered by a test. And we only realized it when we deployed to production and people started using it. And that you also do not realize if you deploy to the test instance because then you need to deploy to the test instance and then you need to use it. And that is just too much work what you want to do before every deployment. You cannot deploy the test instance, build a package and then deploy to production because then it takes one hour deployment or something. So we have a test instance. We also want, yeah, use it more often, but at the moment we do not use it for everything or not that we test every commit on the test instance before. Another question. So OBS in summary is a software and also an open source deployment for building software. And you were very explicitly were talking about using Travis for actually building your own Git commits. So have you thought about integrating OBS further from GitHub or other services so that it could be used with showing built feedback also from OBS and not just triggering OBS built? No, we haven't researched in this area, but yeah, it's interesting point. But I think it's still something different continuous integration and the building. So it's still worth to have both because you also want to avoid to have everything in the build service because then it's also the build service also continuous integration. So yeah, it's the background of that question is if we could cover more use case. Then we might get more hardware support for actually the OBS instance. Yeah. 
In particular, thinking of the non-x86 architectures. Okay, bring that up. Any more questions? I think we are also almost out of time. Okay, it's over. We're going to turn it off.
Tools, processes and procedures used by the OBS team If you ever wondered how the OBS gets developed, this talk will provide some insights into how the OBS development works including tools we use (e.g. depfu, hakiri, codecov) and workflows we follow (e.g. Scrum). OBS developers have also changed a lot in the last time. The OBS frontend team has doubled its size within the last 2 years. We will explain how we brought everyone up to speed with techniques and methodologies such as Scrum and mob / pair programming. The reference installation build.opensuse.org is now also OBS frontend team responsibility. We changed the deployment process by introducing a demolition squad role. Beside build.opensuse.org, we also release OBS regularly and are in charge of quality assurance using e.g. openQA and Kanku. Last but not least, we will cover how you can participate in OBS development, both as a developer and suggesting changes and features.
10.5446/54510 (DOI)
Awesome. So I'm Richard. I'm going to be talking about why I think Btrfs is absolutely wonderfully awesome — except when it isn't — and how to deal with it when things aren't going perfectly fine. So yeah, I am a shameless Btrfs fanboy. You've just heard about all the wonderful things we're doing with transactional updates inside all SUSE distributions — openSUSE, SLE, CaaSP, everything. We're using Btrfs as the default root file system, mainly for the snapshot and rollback features. And there are also features like Btrfs send and receive, which I don't want to go into in too much detail because this is meant to be a lightning talk, but Btrfs send and receive is one of those awesome features that's just lurking away there in the code base that nobody really pays enough attention to. The short version is that you can basically pipe out the entire contents of your data to standard output, and then you can receive all of that and pipe it somewhere else. And you can do that on a snapshot level as well. So you can basically do rsync on steroids with your actual block data, transmitting it across your network or whatever, comfortably building things like your own Apple Time Machine style arrangement with just a couple of lines of scripting, and it all just works wonderfully. The upstream wiki article on this covers it and gives lots of examples of scripts and how to use it. It's amazing. And especially recently there's been an awful lot of work in Btrfs on compression, and that's now a fully standardized, fully supported feature in Btrfs. You can just turn it on with a single mount option; you can shove it in your fstab. But if you have an existing Btrfs system, it's not going to retroactively compress everything that you have. So if you've got an old installation and it's all uncompressed, just mounting it with compression will only compress the new files you're putting on the system. But to retroactively compress anything you want — kind of strangely, yeah — you can use the defrag command, and that will compress while it's defragging. So two birds with one stone: everything goes a little bit faster. There are three different formats, or methods, for compressing. We have good old-fashioned zlib, which is incredibly slow, but also gives an incredibly high compression ratio, so you get an awful lot of storage back for your buck with that. There is LZO, which I think is the default — I can't quite remember, to be honest — which is incredibly fast, but the ratio is comparatively less. And the reason why I can't remember which one is the default is because zstd (Zstandard) is the new shiny hotness, which is in Tumbleweed since kernel 4.14, and I think it was actually backported to the SLE 12 kernel as well, so I think it's also in SLE and in Leap 15. It's incredibly fast, and it also has an incredibly high ratio. And in fact the whole thing is scalable: there's an extra tuning parameter, so you can say, I want this compression method, and I want it at, you know, level 3. On the Btrfs wiki there is actually a table from Facebook — because Facebook is using Btrfs incredibly heavily — with their metrics, where they figured out that for them the sweet spot is around level 3, where they're compressing everything really, really quickly and still getting a really good bang for their buck.
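In practice, the send/receive and compression features mentioned here look roughly like this; the hostnames, paths and the choice of zstd are just examples:

```
# replicate a read-only snapshot to another machine ("rsync on steroids")
btrfs subvolume snapshot -r /home /home/.snapshot-today
btrfs send /home/.snapshot-today | ssh backuphost btrfs receive /backup/home

# follow-up runs can send only the difference against an older snapshot
btrfs send -p /home/.snapshot-yesterday /home/.snapshot-today | ssh backuphost btrfs receive /backup/home

# turn on compression for everything written from now on (also works from /etc/fstab)
mount -o remount,compress=zstd /home

# retroactively compress what is already there, via the defrag command
btrfs filesystem defragment -r -czstd /home
```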
And I think that's what they've set the default value to be, because they contributed that. So yeah, Btrfs is absolutely awesome. But it's not perfect. With any B-tree-based file system, especially with snapshots and all this wonderful stuff, you end up with this lovely complication that you don't really know how much space you're using — or at least it gets a heck of a lot harder to figure out. As you're making more of these snapshots, your snapshots only contain the diffs of what's changed, but when you look at a snapshot, you see all of the files, not just the diffed ones. There's basically no way of accurately calculating all of the disk space in use unless you go into every single snapshot and count every single file, kind of like du does. But df doesn't do that. So you run df, and it's just going to look at the current file system and say, you know, the current snapshot is this big — no idea about all those other copies you've got lurking in there somewhere. So it's kind of like Jenga: all these different pieces of the file system are stacked on top of each other, and df can't figure out which block the whole thing would fall apart over if it pulled it out. So don't use df on Btrfs, or if you do use it, just expect it to be lying to you. There are three different options in Btrfs — because, you know, it's such an awesome file system that if you can do something right once, you can do it right three different times. The basic one is btrfs filesystem show, which gives the absolute minimum of data: it just says you have a file system and it's roughly this big. btrfs filesystem df gives you a layout much more similar to df, with a little bit of extra information about Btrfs metadata. And btrfs filesystem usage just dumps out a huge amount of statistics — and I have to be honest, I don't understand what half of them mean, so I don't use that one much. But btrfs filesystem df makes it clear when you're running low on disk space, so if you're using monitoring scripts, look at that; stop counting on df. If you're using Btrfs, use one of these instead. Because you don't want to just keep piling stuff onto your system to the point where it's completely overloaded and can't even fit this picture on the slide. And if you're not paying attention to your disk space, you can run out of space — especially on a SUSE distribution where we have snapper installed. And really, quite often it's not Btrfs's fault for running out of space — I blame snapper. But the snapper developer, if he's here — Arvin? No? Good. He'd be blaming me if he was. But it's got a heck of a lot better in the last few years. Any installation that's SLE 12 or later — so any Leap 15 installation, any new Tumbleweed installation — will not have timeline snapshots enabled by default. So you're not going to constantly be taking snapshots just for the hell of taking snapshots on your file system. And even when you are using that space up, there is now space-aware cleanup. It's the default in regular installations — anything later than sort of 2016. But if you have an older installation, go have a look at Arvin's blog.
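The three space-reporting commands, as you would actually type them (the mount point is an example):

```
btrfs filesystem show /        # bare minimum: which devices, roughly how big
btrfs filesystem df /          # data/metadata/system allocation, the df-like view
btrfs filesystem usage /       # the full statistics dump
```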
He posted how to turn it on. It's one command. Which is really useful. Yeah. Sorry. My slide deck is broken, so I have to read it this way. Yeah. So if you run out of space with BTFS, BTFS needs a little bit of space in order to be able to delete data. And so, you know, there you have a very simple command to run to effectively reallocate and balance the space. There's a little bit of room left so you can start deleting stuff. So you can then start removing snaphots, just using the standard snap commands. And that will clear up all your free space. Everything will work fine after that. Most of the time. Sooner or later, some file system is going to break. And on BTFS, it has a habit of appearing to be broken more often. Because the data is being checked, BTFS is going to know when your disk is starting writing nonsense data to your system. And BTFS is going to stop mounting that. So you get all these wonderful error messages like your disk is broken or like your file system is broken. It's not normally BTFS's fault. It's normally the disk underneath. So don't panic. Just because it's not mounting doesn't mean it's totally broken. Doesn't mean it's totally beyond repair. And whatever you do, do not run BTFS, FS check, minus, minus repair. It's the worst thing you can possibly do. Because in that case, it effectively ignores whatever the BTW is saying and tries to scan everything around and generally makes a complete pig-zero of it and fucks it up more than whatever was wrong in the first place. So that is the absolute command of last resort. And unfortunately, if you run like BTFS.FS check, or FS check.BTFS, it's the first thing it recommends. Ignore that. Don't do it. Instead run scrub. Running through scrub, we'll check all of that at the highest level possible. 99.9% of the time scrub will fix the problem. Your system will start mounting. Everything's fine. If it happens again very soon after, you're going to realize your disk is breaking. So it's the easy, lightweight, safe way of checking everything. It's totally data safe. You're not risking any data when you're doing it. There's another kind of second option. If the root Btree has got itself corrupted somehow in whatever way, there is always a second Btree lurking on the file system. And you can just mount your system using use backup root. It used to be called BTFS recovery. And that will get the system up and running and actually restore the system to a fully working state. Since I realized those two commands fix almost everything, every issue I've had with BTFS has been fixed by those two commands with one exception. So generally speaking, that's all that we ever need to do. But I used to work in QA. So I've got incredibly bad luck. And sooner or later, you might find something more interesting than that. If that doesn't fix it, you've found something that's bug worthy. Please run BTFS check, not repair, just check. Save the logs and use it to file a bug. Our kernel guys would like to know what the hell happened and how the hell that went horribly wrong. And BTFS restore basically scans through your file system, scans through your disk and data and recovers everything it can to another device. At this point, you found something interesting enough. It's probably a good idea to take a good backup anyway, even if you do manage to fix it in place. Some advice would be to run that. BTFS rescue has a bunch of commands. Right now, it's for these four. 
These are fixes for in place repair of the common issues that BTFS does occasionally get. These are mostly harmless. They're mostly safe to run. They very, very, very, very, very, very, very could do damage to your system. So it's far safer than the running BTFS check minus, minus repair. So have a look at that. Run it. You know, really, I would run them kind of in this order. The last time I had a system that wasn't booting, it was that one that fixed it. So I've never run that one at all because it didn't exist when that was broken. And this one just takes forever because it's going through all of the chunks and recovering them in a very, very slow period. But I know a friend of mine that had an issue with that. So it's there. It's kind of the last, last resort. And if that doesn't fix it, then just pray because the only choice you have left is backing up again if you haven't done it earlier and then maybe think about running BTFS check repair. Then it might help possibly, or at least if it's broken, it'll be really broken and you'll feel better about it. And that's it. Thank you very much. Thank you. Thank you. No, that's what they use. Thank you. So the question was a case of the, what's it say, the route, the BT, the B3 route having an invalid checksum, the super block having an invalid checksum. So that will be fixed most of the time by the use backup route. So the use backup route will try and mount the system by searching through the file system to find that backup copy of its, yeah, of its route, which will fix it most of the time. One of the four BT, BTFS rescue commands does fix that weird edge case where the first, the route is so broken that it can't find the reference to the second route and then it doesn't know what to do anymore. So that's, yeah, that's, I think that's a chunk of cover. That's the one that takes forever at the end. But yeah, one of those, one of those four rescue commands should fix it. And if it doesn't, you should have already taken your nice log and, you know, the BTFS developers will add another rescue command, you know, those basically whenever there is an edge case, you end up with a new BTFS rescue command that, you know, they've made, you know, right now there's four. Six months ago there was three. It's not that bad. Any other questions? Yeah, cool. Yeah, what's my take on Red Hat, supporting ButterFS? Well they've stopped supporting ButterFS because they didn't know how to develop on it. And now they're trying to do all of those features in ZFS, also XFS, and they're going to hit all of the same complications and confusions that we have in BTFS. Like these, most of these are the nature of the beast when you're trying to build a file system that's also a volume manager that can do all this other stuff. So I don't, you know, it's their decision. I think there's a perfectly good option that they should have stuck with. Yes? No, I would not recommend a separate home position with XFS, but you know, I do, yeah. I would recommend having everything in a single large BTFS partition using subvolumes. That's my way of doing it myself. I just take my big disk, I install everything on it. And if I have secondary disks, I might end up with a different file system, but generally I'm BTFS all of the way everywhere. Yep, Andreas. The BTFS send and receive, I haven't done one for ages. I just copy it to a different machine. That's an awesome command. It really is. Yeah, so yeah, I avoid reinstallations as much as I can. 
Unfortunately with the recent changes I made with Va, you know, I've had to do that like once or twice, but yeah, because we changed the subvolume layout, but that's another topic. And I'm already over time. So good. Thank you very much. Thank you.
How to fix a broken btrfs filesystem I love btrfs, I think btrfs is the best filesystem ever. But like all software, it's not absolutely 100% perfect all of the time. This lightning talk will help tell you what to do when it all goes wrong :)
10.5446/54511 (DOI)
So, hello. My name is Jan Wubicka and I'm working on GCC for SUSE. And so this talk is going to be a bit about GCC. And it's a joint talk with Martin Lyszka, which is here. So in about 10 minutes before the end of the talk, he will punch me and start speaking instead of me. So don't be surprised. Let's prepare. Okay. So I would like to say something about link time optimizations. And I would like to try to convince you that it's an interesting thing to try out. And so in the first part of the talk, I will simply explain what the link time optimization is. And then I will spend some time showing you some benchmarks. And then we will try to discuss, you know, if the open source can be one of the first distributions which are built by LTO. Maybe the first one, because I don't know about the other one. So let's start about the link time optimization. This is the usual compilation model of the C compiler, which is starting from 70s. And in this model, you know, you run the compiler on every single source file. You produce the object files which are containing the final binary or final assembly output. And then you use the linker, which just glues it together and you get your binary, which is cool because it's fast and you can distribute the build process. But it's also limiting the compiler in the quantity of optimizations it can do because it doesn't know what the other objects are doing. It only sees the part of the program. So the link time optimization is something which is being introduced since 80s. And it means that you compile the source files into the object files. But this time the object file is not containing the final assembly, but it contains intermediate language. So that's what the IAL stands for. And then, you know, these kind of fake object files, you know, they are no longer real object files, they are put into the linker. And the linker doesn't know what to do with it because it doesn't understand them. So that's why there is a linker plugin, so the LTO plugin. The LTO plugin tells the linker that the object files are actually used by the compiler and it dispatches back to the compiler, which is the link time compiler. The link time compiler takes all the object files at once and it produces the final binary, which is fed back to the linker. The linker pretends it did all the job itself. So this is the basic scheme. And you can see that the LTO is kind of a change into the wall two-chain. It's not only changing the compiler, but you need to change also the linker and the R and all the other tools which are holding the object files because all of them has to understand that from now on the object files can be the real object files, but they also can be the fake object files. And why we would we do this? It's a lot of work. And basically the reason why we do that is to get a better code quality. So if you have the link time optimizer, you know much more than you know on the compiler time. So the first important thing is that the linker tells you which objects, which symbols are used only by the code you see and which symbols are used by shared libraries or binded somehow externally. So basically you can optimize a lot more because you can somehow pretend that most of the functions in the program are static and you can change them. And also you can do the cross-module inlining, which is good because normally you have to put a lot of code into your headrests that makes the programs more ugly to read and longer to compile. 
And this way you can do it somehow transparently behind the user's back. And also the unreachable code removal is quite important because if you see the whole program you see that not everything is being used. And there are some other more things like the exception handling optimization, which means that you can show in C++ programs that a lot of functions are not drawing and you can remove a lot of exception handling information and cleanups. And you can also follow the identical code and you can optimize for the code layout. So this is the basic list of the optimizations which you can do. And there are also problems. So one of the problems is that you need to change the world to chain. The other problem is that the compilers are much slower than linkers because they do much more work. And each time you change a single file, you have to do all the compilation work again, which takes a lot of time. And the next thing, which is problem especially for us, is that the back reports becomes harder because if your program doesn't work, you cannot just take the single object file and source code and send it to the backzilla. You know, basically you got the back reports like Linux kernel is broken if I compile this version with this version of GCC. And I don't know how to reduce it for the... So this is quite a hell. And it's an important problem. And also it's not completely transparent to the user. So most of the time you can add LTO into your command line and you get the LTO when it is done. But it doesn't work in complex scenarios. Like if you do the Linux kernel, you need to do some extra work to actually get LTO working. So this is the quick review of what LTO is. And I will speak a little bit more about how GCC has become the link-time optimizing compiler. So there is actually quite a long history of the LTO work. The LTO work started in 90s. And basically in 90s, the GCC was a compiler which was organized in the way that it was compiling every statement from the source code into the intermediate language. And then as soon as possible, it was producing the assembly, which was necessary in this because in the 80s, there was not enough memory to hold the program or even the single compilation unit in memory. So at that time, it makes sense. But it didn't make sense at the back of 90s. So from then, we started to work on the high-level optimizations. So it was done by different companies. Like the new inliner was contributed by Kosovo Saray, the unit at the time. I remember it was done by SUSE because I started it, I think, on the first open SUSE, or SUSE Labs conference because I was bored during the talks. And there was a new high-level optimization framework, which was started in 2005. And in 2010, it was basically done. So the basic LTO framework was on the place. And it was able to compile some programs. But it was slow. And that was solved in 2011 by adding a parallelization model for LTO. So since 2011, we are basically able to build Firefox in the reasonable time. Like on my machine, it's about six minutes of linking time. And I don't know, six or seven gigs of memory, which is a lot, but it's also not that much. You need 10 gigs of memory to build Firefox in my setup anyway. So how that works. So this is the traditional link time optimization model. And the main problem is that most of the work is done in the link time compiler. The link time compiler can take a lot of time. On Firefox, on my machine, it takes about 40 minutes to finish its job, which is very boring. 
So what GCC does is that it's actually adding a whole program analysis path, which is the only path which is done in serial. And once we are done with the whole program analysis, we are splitting up the program again into partitions. And every partition is compiled independently. So the compilation times are much faster because we are able to use multiple CPUs. And theoretically, I saw able to distribute the build, but we don't do that at the moment. So this is slightly more complicated setup, but it gets you pretty much all the benefits of LTO at more reasonable cost. So this is how the story continues. So since 2012, we have the framework, which is able to compile big programs. But it still needed a lot of work. And basically, the reason why I'm speaking here today is that in 2018, which is this year, we finished the debug info. So you can finally debug the output of the LTO compiler in a reasonable quality. So it's comparable to the experience of debugging optimized code without LTO, which is kind of important if you want to declare the compiler to be production ready. So it was 18 years of work. And I will show you how that pays back. So this is the kind of basic overview of how GCC works. Now GCC is containing the parser, and then it's containing a lot of optimization passes. There are about 300 of them, so I wasn't able to fit all of them on the slide. But I did fit a good part of it. And this is the split of the GCC compilation process. So the first bar, the light green one, that happens on the compile time. So that's relatively cheap because each time you change a single file, you don't need to redo all the parsing and optimization of the other source files. And also you can do it in parallel. Usually we built with the parallel make. And one of the design goals was to put as many as optimization as you can into this early pass. So there is a kind of a set of the early optimizations, which is doing the things which are kind of obvious or simple. So it's kind of close to what you do when you have a high-quality JIT compiler like in Firefox. So we do inlining, we do constant propagation, we do all this kind of standard optimizations which are win-win. They don't get the code to be worse. And at the end of this, we stream out the object files. And once the linker calls us back by the linker plugin, we start with this serial part, which is the orange one. And we read all the program at once into the memory. But we don't read everything. We read all the kind of summaries of what we have written on the compile time. And on these summaries, we perform the interprocedural optimizations. So we do the difficult decisions like where to inline or how to clone the functions. And we don't do the actual work. We only made the decisions. And we partition the program. We stream it out, which is here. And then the compilation part happens, which is parallel again. This is where you do most of the busy work. So there are kind of all the high-level optimizations of loop optimizations and all the kind of more difficult optimizations, which doesn't need to necessarily win. And we also have to redo most of the early optimizations because the program has changed by inlining. But the purpose of the early optimization says that the program gets a little smaller and it's also easier to optimize. So the serial part, the orange part, sees the program in the more elastic way than it would if the optimization didn't happen. So in the traditional link time optimizer, the early optimizations doesn't need to exist. 
And usually, everything is done in the serial part. So the difference in the GCC and the traditional model is this additional split of the compilation process. Okay. So I would like to speak a little bit about how that pays back in the performance. Let's see how I will do with the time. So this is a spec benchmark suite, which is kind of the standard way how you test the compiler performance. So every compiler developer knows what the spec is. There is a big committee choosing the benchmarks for it and the benchmark is supposed to be somewhat representative for the system performance. And the 0 means the same performance as GCC6. And the numbers are in percent, you know, the speedup. So the bigger is better. So it's not completely honest because I should have started on 0 and made it 100 percent, but then you would see nothing on the bars because I wouldn't be able to show you there is a 1.5 percent difference in the performance. So mind that the bars are not very realistic, you know, if they go up, but the performance doesn't go up that much. So the first part of the story is that, you know, if you want to get your program faster, you might try to update the compiler. And we do get better over the time, but the progress is relatively small because we are optimizing for the similar set of benchmarks for 20 years and we couldn't optimize for 15 percent every year because the programs would be too fast. And so, and there is a green is the generic tuning and the orange is a tuning for a specific CPU, which in this case was a Ryzen. So you can see that the CPU tuning has improved because the Ryzen got into the market, but the generic tuning didn't improve that much. Okay, so this is the other thing you can do, you know, you can decide that you will use the most aggressive optimization. So we have the fast option, which allows GCC to get the bigger code, but it also allows GCC to produce some operations which are not completely correct, like assume that all the numbers are numbers and floating point and not the numbers. And here, you know, you can see that we used to improve by about 2 percent over the baseline, which means that, you know, the benchmark is really hard to optimize. It's supposed to be kind of system benchmark is memory bound, but we have improved a lot more in the GCC8. But if you look on the distribution of the single benchmarks, there is only one benchmark which improves a lot. And that's benchmark, which is called hammer, and it's optimized by very basic trick of interchanging two loops. So sometimes, you know, you can see the big jump, but it's coming from only single benchmark, which is somehow not representative in the geometric average. Okay, so this is what you can expect from changing a compilation flex. You can also say that maybe, you know, GCC is too old and you can use different compiler. So this is the clunk and ICC. Now, it's not completely fair to ICC because ICC doesn't tune for Zen, but you can see that the CPU tonic is pretty small. So it's, I think it's representative enough. And again, you know, you can see, you know, how it compares to the GCC8. So basically, you can see that most of the bars are coming below zero. So GCC is actually doing pretty well, you know, compared to the ICC benchmark as well, which is quite good because ICC is, you know, one of the reasons for ICC to exist is to get the spec numbers to be good. Okay, so now you can try the LTO. And here you can see that the LTO is adding something like 2% extra of the performance. 
So it's basically the same as switching on all fast on the older compilers. And that's coming without sacrificing the code size and without sacrificing the precision. And just to show how that works for other compilers, this is how the LTO compares to the non-LTO. So you can see it's fairly, fairly distributed. So you can expect that your average programmer speed up a bit. And overall, by something like 1.5%. And this is what you can expect from the other compilers. So the clunk is also getting some benefits from the LTO, which is comparable to GCC LTO compared to its baseline. And ICC is getting a lot more. The reason is that the ICC has really a big team working on specific optimizations for this specific benchmark. So we know some of these optimizations that are coming from. Like in the Hammer, what they do is that they change the memory representation of the matrix. So instead of changing the loops, you change the memory representation, which is something that GCC simply doesn't go because it's somehow considered to be specific to the benchmarking tricks. We don't really know if we can do it in the way it would be reasonable for real-world programs. So but the conclusion definitely is that while we don't have much space to grow for the Parfail Compilation project, because we are pretty much on the state of the art, there is still space to grow for LTO Compilation. So this is another thing which you can do to help your performance, and that's to use the profile feedback. So I'm not sure how many of you know what profile feedback is, but basically you can use the option Profile Generate to GCC, and then you can run your application. And GCC can use the data which are collected to optimize the application better. Profile feedback is kind of orthogonal to LTO because in LTO, the compiler has a lot of options to do, but it doesn't know what to do because it doesn't understand the program very well. If you have the profile feedback, the profile feedback tells you which parts of the program are important, how many times the loops are iterating, which functions to inline. So together you can get pretty big speedups, and the speedups can be pretty real. If you look on the parallel bench park, which is parallel, of course, you can get something between 17 to 27, 23 percent improvement, which is noticeable in GCC. You get something like 7 percent improvement. So that really translates to numbers, which relatively matters. And you can see that the LTO and FDO together is getting something like 7 percent improvement overall, which is pretty large compared to something like 1.5 percent or 2 percent for LTO or FDO alone. So if you compile these two optimizations together, you get much better results than if you do just one alone. And this is a quick slide on the code size. So you can see that in GCC, the LTO is actually decreasing the code size, which is one of the goals, if you want to build your system, you want to make the system smaller, not bigger. That's different with ICC or Clank. The LTO is increasing the code size because maybe it's not tuned for this goal. And also you can see that the profile feedback is getting the binary slightly smaller than result. So this is how things look like on the benchmarks. If you look on the real programs, the situations are different because the real programs are much bigger than benchmarks. The Firefox is a lot bigger than the biggest program in the spec test suite because GCC in the spec test suite is quite big, but it's old, so it's relatively small. 
And these numbers were collected by the Firefox people because they tried the LTO in their official benchmarking server. They have pretty cool benchmarking architecture, and you can see that they measured that you can improve responsiveness of the page rendering by almost 20% or 30% without the profile feedback. And also some of the other important benchmarks improve noticeably like Dromaio is a JavaScript benchmark, which was tuned by them for a long time. And top painting, there is a startup time on the very end. So a lot of real world benchmarks are improving by this optimization. And like they have this way of testing the performance, so the orange is the benchmark baseline of Firefox, and the violet, or I don't know, blue dots in the guide. So the blue dots are the various experiments that someone has tried to do a benchmark. And here you can see the benchmark for enabling LTO. So basically in one year of the Firefox development, this was the most successful performance improvement they tried. But they are still brave enough to enable it by default, which we will do hopefully. And this is also kind of interesting to me. That's the responsiveness test. So you can see it's much more noisy. And if you see the history, you can see that the improvement is pretty good. It's a time, so going down is better than going up. But in the big scale, they were able to get similar improvements. Otherwise. Okay. And this is just to quickly summarize how the code size works. So of course, that's something which is not important for specs, because the programs are relatively small, but Firefox it is. So you can see that the binaries are slightly getting smaller over the GCC releases, because we are looking into that. And you can also see that the different optimization levels are making significantly different binaries. So the O3 is very large. And OS is about the half of the code size of the O3. And you can see that the FDO, the profile feedback is making the binary significantly smaller, because a huge part of the Firefox is actually that. It's not being tested by the profiler. So it's not optimized for speed. And this is how it goes with the LTO. So the LTO gets really quite big improvements on the size. And basically it goes one step down. So the O3 with LTO is faster than O3 without LTO, but it's also smaller than O2. So the binaries are really improving in the size. And this is just to see how Clunk works. So they have O2 comparable to O2, but the O3 is smaller, and the profile feedback is again not optimizing for the size. And the LTO is also not optimizing for the size. It's the same story as for specs. So that's pretty much for the performance part. So to summarize, the LTO now works, and it can build big programs like Firefox. The library office is built by default with LTO in the open source now. So that's why I'm projecting from Acrobat, because I'm not sure it will not crash during the presentation. And it's very, I think it's pretty successful size optimization, because you can almost always see the size improvements. And the performance improvements, it really depends on the type of application, if they matter to you or not. But often they also do. And there's a lot of space for the future improvements, because you can improve GCC, but you can also optimize applications for LTO. So if we enable it, slowly things will move to the overall better performance. And that's the end of my part. And Martin will tell you how that works with the open source factory. 
So I have a couple of slides about LTO and factory. What we did, we basically took a normal staging project. We modified project config, where we edit dash flto to OPDFlex. Now that recently we added position independent execution by default in the distribution, which means these options should be really used by every single package. The number of failures is quite surprising. It's only 80 packages of more than 2000. The packages consist of all the KDE GNOME and base system. And to be able to test the staging project in OpenQA, we had to basically disable LTO for packages which fail. And then we are able to get distribution, which is close to LTO distribution. So next step was to boot the ISO image, which we got in KVM. And I basically find all the ELF executables and chart libraries, which is close to 7000. And the total size of these files reduced from almost two gigs by about 5%. Note that this is also including packages which were built without LTO. So the real number should be better, I guess. And I have a couple of examples. The first one is the main library of LibreOffice, which reduced by 16%, which is quite significant. And we have also examples of some MySQL binaries, which reduced really significantly, but it's due to usage of just the limited amount of code. So we were able to boot it in OpenQA, and it was able to success the tests, except some fallout, which was quite small, I guess. And there are some issues, the packages which failed for various reasons. And I will go through the issues we've seen. The first two are some limitations in GCC. The first one was a bailout when we basically rejected two symbols being defined or being prevailing in two libraries, which is a valid situation if you use no command option. The second was a real miscompilation where we decided to merge two declarations of functions. One was having attribute no return. And the two declarations had the same assembly name, so it was a real issue. The third one, it's the reason why most of the shared libraries failed. It's a symbol versioning for shared libraries where you can have versions of interface, of functions which you export. And it allows you to run executable, which is dynamically linked with a newer version of library, but still using some old interface, which is quite a nice feature, but we will have to add a new function attribute for next GCC release. Yeah, static libraries. So in general, we should not ship static libraries in other packages. There are obvious exceptions like some error recovery tools for file systems, for instance. And what we have to do, we probably have to enable so-called FFET LTO objects, which are object files which consist of both assembly language and LTO IL. And at the end of build of a package, we basically have to strip all the LTO byte code and we have to verify that we do not ship it. What's good about it, it's that even if you have a package which you want to be built with LTO and is linked with a static library, it's possible because LTO can transparently mix LTO objects and assembly objects. So yeah. We have some special LTO warnings done by Onza. The first two examples, this is issue, the first one I looked inside and it's issue where you have a structure being defined in a header file and it has conditional fields based on some macro. 
And if you forget to include, for instance, config header file in a translation unit, then you end up with the two translation units having different layout of a structure which can cause failures because of the size of it's different than the binary. So the layout in memory is different. Then we have some legacy configure scripts which do gripping of object files which is not possible. If you use LTO, you have LTO IL in the object file so you can't grab for a format of floating point for instance. So it's quite rare I would say. Then we have this tool, DWZ. It's a Dwarf compression tool which is being developed by Jakub Jienek from Red Hat and it looks he's not having enough time to enhance it to fully support LTO so it's definitely what we need to work on. And maybe we'll see some higher memory constraints for couple of packages like LibreOffice and etc. We'll see. And last issue I have, it's quite similar to symbol versioning. It's usage of the level assembly. As you can see in the example, there is a function being defined in assembly which is just string for GCC and the LTO failure which you can see on the bottom is caused by a caller of the function is in a different translation unit in LTO so that it can't find a symbol. It's quite easy to fix. You basically add no LTO to the translation units which use these top level symbols. This is last slide of the talk actually. It presents a histogram of text segments of packages and it basically tells the most of the packages are smaller and few of them are bigger. The biggest size improvement is seen on executables and the smaller is on shared libraries which provides quite some exported symbols. And conclusion of the talk is basically whether we want to have it in open-sUSE factory being enabled by default. We hope LTO and GCC is mature enough to do it. As I mentioned, we can be the first distribution which eventually will maybe one day appear in SLEE as well. Thank you. Do you have any questions? So if you create the profile for your application, for compilation, how probable is that it won't work when you start using newer version of GCC? At the moment, the profile is specific to the configuration and to the GCC. So the idea is that we have to train it during the package build. The way it works is that in many cases, like in the Python or parallel, we can simply run the test suite or do something like that. So in the Firefox case, it's slightly more challenging. So what they do is that they simply start the Firefox, they cycle through some pages, and they have a make file machinery for that. So you need to open your web server which can be VNC, so you don't need to really see it, and that's how the training, each time you build it. Okay, thank you. So you built a whole distribution with LTO. So how much longer it takes now? Actually, I haven't measured it, but there will be some increase, but I would expect just a small one. So the overhead would be quite small, I guess. No, it depends, of course, on the package, but if I remember correctly, it was something like 16 percent for LibreOffice. So that's, you know, bigger packages are harder, the smaller packages are easier, so I would say the average should be better. But for example, GCC itself is terrible because it has static library which is called libbackant, which is very big, and it links it into every single language it supports. So the GCC bootstrap gets much slower without LTO. So it really depends on how the program is structured. Two questions. 
Have you done any testing on architectures other than x8664? And since you mentioned this, leave for the future, does this impact our ability to do live patching? So the first question is no, we just, I just used OpenQA for x8664. And the second question, we probably don't want to have it in Linux kernel. Currently you need a huge page set on top of Linux kernel to be able to build with LTO, and it's not upstreamed. It's under cleaners working on that, and he was rejected couple of times by Linux to merge your page set. So actually, yeah, I know the LTO support in Linux kernel is interesting because for us, because it really trains a lot of strange cases, which GCC needs to deal. But on the other hand, you know, the benefits are not as large because it's heavily hand-optimized to the carefully handled code base. Now the benefits are much bigger on things like LibreOffice, which is huge and has much more abstraction penalty. So yeah, you know, there are some packages like G-Lypsey, which I don't expect we want to have LTO at all. And also, you know, for the x86 question, we didn't do the open-sus test, but of course the performance and spec benchmarking is done by ARM and IBM, so they are also tracking the LTO performance. The GCC code for all those LTO handling is that generic code or are there architecture specific bits so that, like, say, power might be, the benefits might not be as large as now for Intel. Yeah, no, technically, the only architecture specific part is how you pick it into the object files. So we need to understand the object files. So there's a different support for Mac, different for Windows, and different for Alf, but besides that, it's a generic. So it goes for all the targets. What is the increase in the, due to this IEL code being added to the individual object files, would that be noticeable in, you know, some packages having constraints for the disk sizes in OBS? Yeah, they are generally bigger. Yeah, they are not 10 times bigger, but they are somehow bigger than the usual object files because they contain more information. But on the other hand, we are compressing them, so they are compressed, so they take, yeah. And also, yeah, that's something we are working on. You know, we are trying to reduce the quantity of information we are streaming because it's an important performance bottleneck. So yeah. I don't remember the numbers, but it's not twice. It's, you know, some percent bigger. So I guess it's time for beers. Thank you. Thank you.
Link time optimization (LTO) extends scope of compiler optimizations to whole program or DSO. We present some data on pros & cons of using LTO to build openSUSE distribution by default. This is joint with with Martin Liška and Martin Jambor.
10.5446/54513 (DOI)
test test one two one two and now you can hear me as well but you heard me without a microphone as well so it's just for recording yeah it's cool so thank you very much let's start for joining and attending this presentation this time it's kind of in follow up what we did before before I did a rough explanation what's that is and now the follow up is how to manage the whole stuff I would like to talk about this mentioned dashboard same thing as before if you interested in just scan the QR code you would directly redirect it to the link it's hosted on github written and reveal because someone asked me before it's written in reveal.js and it scales down to any device I could do a separate presentation afterwards about this this is amazing thing so let's start today we would like to talk about the mentioned dashboard and I say we I mean my lovely colleague Laura, Padoano and myself I'm Kai and yeah just the content for today another quick introduction just let you know who we are then some kind of history to get a better understanding about the background where we're coming from therefore we have to talk about open attic and yes it has nothing to do with it's not a specific part of a house it's a different discussion someone in Germany tried to find a cool name for storage solution that could be complicated so seems like German folks are not that good in finding English storage solution names that's how we ended up with an open attic second thing is the dashboard B1 finally we're switching to dashboard B2 and if the Wi-Fi will work and I hope so we'll have a live demo prepared and more people joining welcome nice to have you so let's start with a quick introduction I think I talked about myself already name is clear same thing like before you can find me on OFTC in the separate different channels we both are from Fulda or near near B Fulda it's in the middle of Hesia middle of Germany somehow hidden our date of birth amazing and once I updated the slide or those slides I figured out maybe it's not the best idea to write the hours down because then I figured out oh man think I have the feeling I'm getting old because the hours are so many already so maybe I will change this so this is enough from an introduction and obviously we both are from Susie quick history to get a better understanding and for the history as I said would like to start with the open attic cool name what is it someone heard it maybe before open attic was founded in 2011 so quite some while ago and initially open attic looked like that and what it was made for it was made for managing just the local appliance storage piece of hardware to create some LVM shares on top share it via NFS ice guzzy sifts also fiber channel I was proud of that one fiber channel was possible and you could create snapshots and all those things but just on one on a single instance later on we ported and you were able to add another storage and we supported DRBD as well you could create the RBE shares within the UI that's how the whole thing started so I would say a single unified web UI solution to storage to manage just one piece afterwards roughly two and a half three years later 2014 we added the initial theft support because we figured out sounds like stuff is a cool project maybe we should also do something in the UI as we have it already and the first thing we've added was that what is that that's a visualization of the crush map and if you've attended my talk beforehand you know the crush map is the topology of the cluster and you can build 
it that's cool and we thought it was cool and it's still in the current upstream version of OpenMatic but we modified just one thing and the initial version is was this was possible to edit this crush tree and to add new rules to the crush map and you could also track and drop every entry into in this three and then you could click on apply and that's it no further notification nothing just we thought it's that's cool to modify the crush map but we figured out that people customers who ever will use it and they will change something click on apply and then they're kind of screwed and then they call you why the hell is my cluster stuck and I can't use it anymore and it's busy almost 100% that's when we realized okay we make it the read only mode we removed the edit functionality and that's how we end up today what we currently have in mind is to rework it or redesign it completely to have more or less like a result-based solution where it could create new rules and then you get informed how many data will be reshuffled for example if you apply those rules to your cluster are you really really sure please I don't know kind of contact your administrator your boss and someone else before you applied it after that two years later the initial collaboration started with Susie and the Suze once we started that we added some initial graphing visualization of the staff cluster some really basic stuff and only a few months later in November obviously we were the first acquisition ever made by Susie I'm still not 100% sure if they've just tested us for a few months and if this was just a trial so okay we give them some money we test them is the team really good idea or should we buy them or not and then in November finally they didn't bought us that's not legal in Germany they bought the project and usually I don't know surprisingly we are now part of Susie I know how this belongs here's a picture of the people Laura was already there you are almost there from the beginning right so joint I don't know few months after we initially start in 2011 and then in 2017 we're focusing on several only since open attic version 3.x we focused on several only the reasons are really easy because beforehand we supported multi distros like Debian Ubuntu sent to us we bought several packages and added all the functionalities now being part of a bigger company with some kind of an roadmap with a product behind yeah the problem is we were still just nine nine ten developers at least and supporting all those different distributions plus adding the whole features they told us yeah wasn't doable that's why we traded off the multi distro support and also the local storage management so we've removed the whole part so there's nothing left in the current version anymore that's how it ended up so the current a current open attic version 3.x looks like that this already includes Grafana into the UI and the backend or the data that Grafana's gathering is stored in a time series database called Prometheus and that's completely automated automatically deployed via deep-sea if you use the salt-based solution that we have this whole stuff is set up automatically so we talked about OpenEctic that's a rough history now we're talking about dashboard B1 that's another thing to understand the whole full picture initially with Stef Luminous there was the so-called Stef manager dashboard added and we call it the dashboard B1 you were able to see some the Stef hell some initial log some performance counters also a list of OSDs which was 
quite nice as well as your images mirroring all the stuff back end was written in Python and use cherry pie and the front end was written in Rivet.js that's how the whole thing started but this yeah just a picture how it looks like or it looked like it wasn't remember that it's important later it wasn't black just keep that in mind I said it so I will come to back to that later it's in black that's how it looks like and after Luminous was released they started to adding new features ArchEW details and monitoring with some perf counters as well as a browser to browse all the configurations available within the self cluster which was quite nifty the problem was dashboard B1 has some limitations first of all it was a read or still was a read only dashboard so you couldn't manage anything you could take a look at something but if you would like to create an RBD device share something that wasn't possible no built-in authentication neither at least in username or password doesn't matter nothing everyone could just connect to it and from our perspective a limited functionality was rivet.js that's our point of view that's how we ended up with the next one and I would like to hand over to Laura thanks so this is about the dashboard V2 so it started in January 2018 that there was a general discussion about the old dashboard and the future of that dashboard and our idea was to contribute the part that was already in OpenEthic as a new dashboard so dashboard V2 and we discussed that with Sage and John and so we decided to create a proof of concept and yeah then we decided to use Angular the other dashboard used another technology but since we already used Angular for OpenEthic we decided to to go with Angular as well and yeah at the end of February around there we created a development branch for the dashboard V2 where we kind of migrated all of our work from the OpenEthic from the old OpenEthic to that development branch so we could create a pull request against Ceph with a new dashboard replacement and yeah on March 6th we our yeah our version of the dashboard was merged and there we had the feature parity with the dashboard V1 and yeah there we submitted over 150 pull requests and 122 were merged and since we've merged that first version of our dashboard we already submitted over 170 pull requests and more than 140 pull requests have been merged and yeah currently there's a lot of groundwork going on we have to build the foundation for the for the new dashboard and here's an overview of the dashboard what's already there and what we use in the back end we have Python and we use CherryPy and as I mentioned we use Angular 5 for the web UI and yeah because we already used it in OpenEthic so we are really familiar with that and currently we have username and password for the login and yeah and as mentioned we have the features that are in dashboard V1 already in our dashboard for the master of Ceph and yeah there are additional management features integrated but also some we are still working on so what we have in the back end what we built on top of the dashboard V1 version is a task management we have a BrowseableRest API but that is also currently being changed again with a new replacement we have RBD management we have RGW bucket and user management and Ceph pull management that is also in the front end for example the Ceph pull management and the RBD management we have the RGW management in the front end ErasureCodedPool profile management and the task manager as well you have just you have a 
window where you could see running tasks because if a task takes longer then yeah just creating something and you can instantly see it then you have a small pop-up window where you can see the progress of that task so just to inform the user that is something that takes longer and is going on we have tooltips so if you go over with a mouse over some some items you get additional information about it and we also have a usage bar component to show some some graphs in the UI and what's next is the Grafana proxy that is something for the which is being developed for the back end so we can then integrate Grafana into the front end still Ceph pull management is in progress so we can edit pools for example what is also in progress is for the back end currently cluster wide Ceph OSD management so we can set set flags for OSD which are then applied cluster wide we have a settings editor there's also work in progress that is a page where you can see all settings from the cluster that are active currently the plan is to have that as an editor so you can change Ceph configuration settings in that page and also translation localization is a topic so we have the dashboard in different languages currently Spanish is being it is being translated to Spanish yeah we have also German and yeah if you or if someone wants to contribute with localization that is also something we appreciate and what's also next is user permissions so you can set user permissions for example for add actions or edit actions if you want to create something or if you want to delete something so we wanted to the idea was to have that per page so you can allow a user to go to the pools page and create a pool or you can just unset that and the user is not really allowed to do something on that page except for yeah kind of read only that is also something that is being worked on so I don't know if we have luck with the yeah with the Wi-Fi this is one thing maybe worth to mention is right now what we're trying to achieve is we try to convert or to reach future parity with the old open medic so all the functionalities that we have had to mention functionalities we would try it we right now we try to port them into the new upstream included dashboard there are some problems that we currently facing because we have to choose a orchestrator somehow or make it more or less modular that we can support multiple deployment tools like Seth Ansible various transcripts whatever and deep sea for example salt base our salt base version from Zuzer so to support all of them and that's what we currently trying to achieve and as soon as we achieve that we will add more and more functionality on top but from our end the cool stuff is that as soon as you now install Seth mimic you will get the dashboard by default and there's no third-party dashboard needed anymore and this will definitely improve for Nautilus which is planned to be released by the beginning of first quarter next year so just as a heads up let's take a look of my Wi-Fi's working or not yeah should I press F5 I don't know I just show you the login page because we have one and if the Wi-Fi is not working anymore it's awesome right so this is how what I can show you let me try to log in in my former talks I did it I did the bad joke that our old login screen looked completely the same the only thing that was different was the logo and the name it was welcome to medic instead of what come to Seth and the logo was different and then a former presentations I just showed this as the newest thing 
that we've developed and that's it so that's cool let's try to log in please please yeah amazing and one thing to mention I mentioned it you know what yay exactly and this already started several conversations I hate those conversations but there are there are several conversations because seems like a lot of people like like the old black color and I know that someone is already working on a seeming PR just to make it possible to change the colors of the yeah you are because why not that's the most important part that we should work on obviously given that we are now based on the old or that we try to reimplement the old dashboard it looks totally similar you have an at least an overview overall health status you get how get a list of how many monitors do you have the quorum how many OSD's list of pools then the usage bar with an over this cool little tooltip that's something that we've added that wasn't part of the old dashboard we want and then something that open attic does not have is those logs so you directly get a cluster log and an audit log from the cluster so you can click on and get direct information from the cluster so this part is already ahead of what open attic is capable of let's switch to the cluster topic or cluster tab there we have several tabs for example the hosts there you have just a list of hosts including the services there's nothing more in it yet this will change as soon as we have the orchestrator and we can talk to the nodes and we can then interact with them and do specific operations on them right now it's just not possible from clusters we have a list of monitors so at least you get in broader general overview then you get a list of monitors you would get those monitors which are not in quorum because they are down or have others several problems you get a list of open sessions I have sex six things connected that's amazing you get their addresses and then you can click for example on a specific one one a and then you get some details performance counters from this one most of them are empty because this is a development system development branch and I just started to be stock cluster so there's nothing behind was D tab we can can list the OSTs all of them then we edit those usage things the read writebyles I know it's quite true if you've seen that they update automatically same for writebyles you get the status of them and underneath you get those OSD map so you get some details from the OSD specifically as well as you can get details for the metadata same for performance counters so you get get some data for this specific OSD ID zero what was collected and then the histogram this was part of dashboard we wanted was implemented so maybe we have to redesign that but as I said we converted it so that's for USD a cool thing that's what we mentioned with the configuration browser and I hope my wife is still working seems like seems like not I don't know at least I wanted to show you if it's not possible I will explain what it does it will list unknown error that's amazing it will list all configuration items was in your cluster and the cool thing is you can't can change the level so you have at least a basic level and advanced in the development level so in the basic level you get I don't know if you select an OSD you get four different different variables you can change for values if you change to advance you get I don't know 50 and development 100 so that's the idea and what we're currently working on is the editor of this configuration list that we have 
right here but plus we would like to show the default how the default looks like and then mark those you have changed manually mark them red or highlight them that you know okay I changed those variables and maybe because of given reasons and then you can also adapt and change that's cool for life deal with us always the case right seems like it's not not I had Wi-Fi once I still have oh my VPN dropped because of reasons let me give it a try to reconnect don't we have someone here from from Susan Infra who can fix the just kidding let me see if I can I can write let me create an internal ticket yeah that's possible please send me an email that the exchange or whatever server is down yes of course it's a great idea chicken and egg it's connecting do I have do I have Wi-Fi here or do I have at least you have let me let me let me create a hotspot we are flexible right we are kind of agile so I create a hotspot I don't care let me change let me check and then now I have edge wire that would be amazing let me connect and let me check yeah my hotspot is working round of applause is amazing let me check if it's still if it's working or if it's not and there is the configuration editor or no just browser sorry I'm back thank you mobile device in 21st century this how the that's what I try to what I try to explain this is how the general yeah overview of the configuration management looks like let me let me select something for example I click on OSD as I said I had a level of basic I just get two options I could change cluster address public address and then if I'm more serious I can change to advance for example and then I get more details and there's another page you can go to and get more information and there are several ones be sure so if you're really serious you can do a lot of really cool stuff here so let's switch to pool we are listing all the pools that we have within the cluster and also we are listing the details of the pools underneath so how many house the PG nums or how many PG this pool have for example or also some performance counters yeah that's for at least that's for pools one thing I can show what we add from a manage management yeah perspective is within block the block tab we're not able to create RVD images and one thing we've we've added as well is the possibility to create snapshots of those RVD's so if I click on snapshot I here I have an overview of those snapshot I just created a test snap and this snap for example right now is protected and if I want to unprotect it I can click on then you'll see you create a long running tasks a bit longer get a notification and it's unprotected so that's something we've developed and if you want to for example create a new one you can click on add let me new OS C RBD it is cool new OS C RBD 200 max you can also change the features for example and here we have some kind of dependencies included if you have seen open medic before this was there already the fact but for example object map requires exclusive logs so as soon as I remove exclusive log all of the others where are automatically unchecked and as soon as I click on exclusive log I can turn on object map and test this again as well let me create this my hotspot is working quite well and there it is my new OS D RBD for example and now I can create snobby create a snapshot and then it will create a snapshot on top of this RVD that's for example a functionality with that we already added that's ahead of we won. 
Plus, there is more I can show you: there's the mirroring and the iSCSI pages, which were already part of v1, but given that I have neither configured a mirror nor an iSCSI gateway, I can only show you empty lists. As soon as you have a mirror configured, or an iSCSI gateway, you'd have something there; but usually what we do is just check out the current master and then start a vstart cluster, which sets up a bare-minimum cluster for development purposes, and it's just overhead to deploy everything. File system: we are able to show the file system, so the CephFS on top, and there you can also list the clients, how many are connected, and also the ranks of the clients. One thing that's maybe interesting, or that you might be interested in: as soon as you have more than just one, two, three clients, you maybe want to know which of my clients is the one that is, I don't know, dragging down my whole cluster all the time because it's totally freaking out and doing some weird things; so there you get a list of reads and writes, at least for specific nodes. Ten minutes before our talk I figured out that I can show you something else, the object gateway. Well, not the object gateway I wanted to show you, but it's one thing that we've added, and those are those hints and notes: if something is not working, we try to add some hints or some guidelines on how to configure it, and here you can see, if you can read it, you get a piece of information and a direct link to the documentation on how to configure the object gateway, for example. I had just seen this five minutes before I started my talk; it seems like my RGW crashed on my laptop, so maybe it's not the best system to host a cluster, but just so you know, we're trying to make it as easily usable as possible. When you click on the link, just so you have seen it, it will open a new tab and redirect you directly to, for example, 'dashboard plugin: how do you enable the object gateway management frontend' within the documentation, so there's no need to, I don't know, walk through the whole documentation or use Google... or at least read the documentation, I know people hate reading documentation. Background tasks: just to show you what I did before, I created, for example, a snapshot on top of this new oSC RBD called snappy, and there you can see all the tasks that I've done, and also the recent notifications; where I've seen this error we got before, those are listed here, so we have a clue what was going on.
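The hint shown for the object gateway points at the documentation; from memory, the wiring it describes looks roughly like this, but the dashboard subcommand spellings changed in later releases, so treat the names below as an approximation and follow the linked docs for your version.

```bash
# Create a system user whose keys the dashboard can use against the RGW admin API
radosgw-admin user create --uid=dashboard --display-name=Dashboard --system

# Hand those keys to the dashboard module (subcommand names as in the Mimic-era
# docs; newer releases read the keys from a file instead)
ceph dashboard set-rgw-api-access-key <access-key-from-above>
ceph dashboard set-rgw-api-secret-key <secret-key-from-above>
```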
That's a general quick walkthrough, as you have seen, if you have seen the openATTIC UI beforehand. What's missing, of course, is the iSCSI management, what's missing is NFS management, and the whole pool management, for example; as Laura pointed out, a pull request is already created, we're working on it. And for the whole management stuff our hope goes out to John, that it's done sooner rather than later; that's how an upstream project works. But to be totally honest, I'm really, really happy that we were able to convince our internal, I would say, managers that we can push our efforts upstream, and that Sage at least allowed us to do so. So now we have roughly between 10 and 11 upstream developers working on the dashboard, and what's even better: beforehand we tried to build up a community around openATTIC, which was rather complicated because the contributions to openATTIC were quite low, or I would say non-existent at all, and this completely changed once we merged the initial pull request into Ceph. We now have folks from Red Hat working on the dashboard as well, we get new pull requests, we get fixes, and this is totally amazing. So I'm really looking forward to, I don't know, making this UI as usable as possible, and for that we need your feedback. Please use it, just take a look at it. I know the management functionality is mostly missing right now, but initial feedback is helpful, even if it's "I want the black theme back", something like that; just feedback so that we know, okay, what could be improved, what is missing. Maybe just contact us, that would be helpful. And with that, I think we're at the end, and we can do questions, right? Let's switch back to here. If someone has questions about the demo, I will not full-screen the presentation again now. So now it's question time; if you have questions, ask them. So you want people to test it out, et cetera: how much of this is in Tumbleweed, or would you have to do a git pull from upstream to try and test the new one, or do you have a development repository within OBS that people can take a package from, test it, and provide feedback? Currently, within OBS, we try to be as up to date as possible, so we have the most recent packages for Luminous, and we already have packages for Mimic; Mimic was branched off two weeks ago or so, so we have current packages. What we don't have is up-to-the-minute packages from master; we don't build master for every change, we build it regularly, I don't know, once or twice a week, something like that, but not every day. So if you want to be up to date right now, then you would have to check out from GitHub. More questions? I think we have time left, so yeah, questions, or is everyone totally confused? You can also ask a general question if you would like; otherwise I don't want to steal your time, we can also wrap up. Last chance, then: thank you very much for attending this presentation, and thanks to Laura for helping me.
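For reference, "checking out from GitHub" in that last answer boils down to roughly the following; paths and flags are from memory, and the Ceph developer guide has the authoritative steps.

```bash
# Build current master and bring up a throwaway vstart cluster for testing
git clone https://github.com/ceph/ceph.git && cd ceph
./install-deps.sh
./do_cmake.sh
cd build && make -j"$(nproc)"

# A bare-minimum local cluster (new keyrings, debug output), then tear it down
MON=1 OSD=3 MDS=0 ../src/vstart.sh -n -d
../src/stop.sh
```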
The original Ceph Manager Dashboard that was introduced in Ceph "Luminous" started out as a simple, read-only view into various run-time information and performance data of a Ceph cluster, without authentication or any administrative functionality. However, as it turns out, there is a growing demand for more web-based management capabilities, to make life easier for administrators who prefer a WebUI over the command line for managing Ceph. After learning about this, we - the openATTIC team - approached upstream and offered our help to implement the missing functionality. Based on our experience in developing the Ceph support in openATTIC, we think we have a lot to offer in the form of code and expertise in creating a Ceph administration and monitoring UI. We have already reached feature parity, replaced the existing dashboard mgr module, and are moving forward. If you want to learn more about it, this is the right talk for you.
10.5446/54514 (DOI)
I'm going to talk about a bit of a crazy project I've been doing for last year, which is basically building a cloud from nothing. By the way, apparently I type Prague in wrong. Just like before you. First off, I'm Chris. This is my third open source conference now. I'm a bit of an IT jack of all trades. My original background in university was electronic engineering. So I'm quite happy building circuit boards with small IoT devices and stuff. I also run on the totally alpha end of scale, a open source project called Birgit but monitoring, which is a distributed monitoring platform. I'm kind of happy in all that scale. What I do most of my time, my day job is postgres. So I'm a postgres consultant and I've spoken up a couple of postgres conferences. Although at the moment I'm doing quite a lot of Java as well. My talk is all about the cloud and the hardest way to build one. So first off, I thought I would explain what I mean when I talk about the cloud. It's really easy to think of Amazon, Azure, Google as the predominant public cloud providers. Some people might think about OpenStack for building on-premise clouds. I should have done OpenStack to be honest. But yeah, you learn the hard way. But what I consider a cloud really to be is less about scalability that's often talked about with the public cloud and the defining factor of the cloud is your ability to scale up and down instantaneously. I think it's much more about automation, autonomy and abstraction. So it allows us to create and use resources automatically without human intervention. It gives us the autonomy to manage those resources ourselves without human intervention as well. So we can delegate responsibility for resources created to the owners of those resources. And abstraction, we decouple ourselves from the underlying infrastructure that operates those data centers. Data centers are complex things, especially when you're running at a scale of Amazon, etc. And we don't need to know anything about how the underlying works. And a lot of this talk demonstrates that beer and eBay is not the best combination in the world, especially having a few beers in the evening and then buying hardware on eBay at night because that seems like a good idea to do at the time. And I've worked on a number of bad OpenStack deployments where I've been having to run services on top of OpenStack that's been deployed badly, sadly, running all versions of OpenStack. And I've had the pain of OpenStack's networking component failing and taking down my production services. And I've worked on a number of migrations from physical on-premise environments to the public cloud. So I've also got a fair understanding of the kind of economics of the cloud and the hidden costs that most people don't think about, for example, how much it will cost to send your data to and from availability zones. And as an electronic engineer, I really like hardware. It's a bit of a fetish, really. So kind of appealing buying hardware, playing with it, having my own little pressures. And for the kind of open source projects, I've been working on my other work-related projects and non-work-related projects, it's really became necessary for me to have quite a large-scale test environment. So my monitoring project is a distributed monitoring platform. So I've got lots of different components all talking to each other with things like RabbitMQ going on. And actually to be able to try and make that reliable has proved difficult without actually an adequate test platform. 
But I'm also trying to do this on a bit of a budget. And I don't want to, I also play around with quite high-performance databases where I want quite serious and dedicated storage and I.O., which is extremely expensive in the public cloud. Hence, kind of why all of these factors accumulated for the rather bad idea of building my own little cloud. But to be honest, what it's really about is curiosity. I've always loved taking things apart, working out how they work, putting them back together again, probably breaking them. And the whole driver for this was really, it was a way to kind of see all the components of a cloud or at least the three key principles, how they work and interact together and how complicated some of the underlying software actually is. And a chance for me to kind of realize and work out how to go about building clouds and better understand how they work and all the infrastructure underneath them. So the main problem with software is it's a bit soft. It needs something to run on. Thankfully, I have a very friendly flatmate who doesn't mind me having servers in the lounge. They make very great coffee tables for a while until it's the summer and it gets a bit hot. But we need to start somewhere, so we're going to need some compute capacity because we want to run probably VMs or at least applications somehow. And like I said, a lot of everything I bought was bought off eBay and it's all kind of five-year-old dish equipment because most corporate entities are refreshing a five-year cycle, so you can pick up quite powerful hardware quite cheaply and in relatively good condition. And the economics of hardware, it's been pushed down to the lowest common denominator really and they're built as cheap as possible and they don't really have much value after they've served their life, so they tend to be cost more to dispose of them, which is why they end up on eBay really cheap. But the other interesting thing is CPU performance hasn't massively grown in the last five years or so, compared to say 10, 15 years ago where we'd see doublings of performance in generations, we now see 10, 20%. So cumensively, CPUs are going more parallel rather than faster and so you've got to trade off between spending lots of money from a modern server that's got a really parallelized CPU, lots of CPU cores, or spending the same amount of money and having 10 boxes slightly older, but for what I wanted, I wanted more redundancy, I wanted more devices, more things talking to each other, more things to go wrong, essentially. I'm kind of more than happy to have stuff that's out of warranty to maintain it myself, to void the warranty completely. For example, I've got switches where I've taken fans out and modified them, et cetera. So now we've got some actual bare-bone compute and chassis. We're probably going to need to upgrade them a little bit. Most of them don't come with the kind of RAM I wanted, so shopping around, find modules when they're cheap. The RAM prices are going up and up at the moment, weirdly, but I'm going to need lots and lots of RAM sticks. I also want lots of storage, so I need lots and lots of hard drives. But I'm not buying snow or fancy SAS hard drives, I'm just buying really cheap Laptop hard drives. Most of them come essentially brand new because they've been pulled out machines when people put SSDs in. And they're quite cheap to get hold of, just kind of shop around. So I have a whole part of hard drives. 
And of course, I'm kind of maxing out all the storage in my chassis, so I need some road controllers. And again, basically the cheapest ones I could find anywhere, so that I can connect as many hard drives into them as possible. Then we've got all these servers, lots of storage, lots of memory, lots of CPUs. We need to talk and we need to get them talking to each other somehow. And one of the things that I, previous environments I've worked in, we've had, say, an NFS server that we've run all our VMs off and it's gone over a gigabit network. It's just really not fast enough for the type of storage and what I wanted to play with. So I wanted 10 gig, which if you go and look at the list price for a brand new 10 gig switch, you're talking probably 10, 20,000 pounds, which is just totally unaffordable for me. However, there's this great thing of, there's a company called Quanta, which is essentially a white label switch manufacturer. So they produce essentially board com reference designs and they're mainly bought by big massive data centers and then eventually ended up on eBay. So I was able to buy a 24 port 10 gig switch for $300, including shipping from the US. And that's actually, the complexity there is the manuals are almost impossible to find and take a huge amount of Googling. And eventually I'd actually managed to get a mate who works with an e-mail to send me a copy of the FastPath manual. And the firmware is even more difficult to find. But what was quite interesting is when I took the lid off the switch, it did have a little Amazon label over the flash memory. So I'm suspecting it came from Amazon data center. And I kind of really wanted 10 gig because I want fast VM to VM traffic and also fast storage traffic. And so I got some switches. Now I need some Nix. Well, again, if you look at list prices, they're quite expensive. One thing to really note is cat six based 10 gig is really, really expensive and actually more latent than fiber based. So I went all fiber because it was much cheaper. So if you shop around, kind of look around, find stuff when you get it, you can get 10 gig Nix quite cheap. And you know, gigabyte switches are 10 a penny really. And don't cost anything. But because I ended up going 10 gig fiber, I need an awful lot of these little things. These are called SFP plus modules. So they're essentially a fiber optic laser and photo diode that you then plug into the chassis, into the switches and the Nix. With the SFP plus based stuff, you can use something called direct connect. So direct attach cable. But they're almost impossible to find cheaply, to be honest. So the actual modules are easier to pick up, but you do need one for each end. And you can also change the fiber and get different lengths. The kind of interesting thing with these little things is it's really easy to find basically the unbranded ones quite cheaply. So the actual OEM manufacturers. However, some of the switches will only let you use branded ones. So if you've got a Cisco switch, it will only let you use Cisco SFPs unless you dive into the debug commands and disable that. Thankfully the switches I got don't care because they're broadcast design. They just don't care about anything. What's quite irritating though is the Intel Nix driver in the Linux kernel will only allow you to use by default Intel SFPs. There is thankfully an option to turn that off from the driver. But it's kind of irritating that a free and open source kernel actually limits what you can do with your physical hardware. 
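For what it's worth, the driver option being complained about here is, for the common Intel 82599/X520-class cards, the ixgbe module parameter shown below; other NICs use different drivers with a different knob, or none at all.

```bash
# Let the ixgbe driver bring up third-party SFP+ modules
echo "options ixgbe allow_unsupported_sfp=1" | sudo tee /etc/modprobe.d/10-ixgbe-sfp.conf
sudo modprobe -r ixgbe && sudo modprobe ixgbe      # or just reboot

# Confirm the parameter took effect
cat /sys/module/ixgbe/parameters/allow_unsupported_sfp
```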
So I've got everything now. The only things that are easy to forget is the simple things, all the cables to plug it all together with. All the little weird and wonderful cables that you might need inside your servers that you end up getting shipped from China or something that might take a while to arrive. And the different types of power cables you might need based on your data center and what kind of PDUs are provided for you. And you're literally going to need some nuts and bolts because you need to screw stuff into a rack. And you probably want lots of cable ties, probably Velcro based. And don't forget the old serial adapter to talk to those switches. But now we've got all this. What are we going to do with it? Well, one of the great things I like about Linux is it allows you to do some cool stuff in networking that most network engineers don't like you doing. For example, the bonding driver has a mode called balance RR, which allows you to round robin packets across NICs. So it's a great way to build faster interfaces off cheap interfaces. So by keeping my switches separate so that they don't ever talk to each other, I can get 20 gigabytes of throughput across my network for storage traffic. And so essentially the design is I want a kind of homogeneous setup where each node in the cluster is doing both compute, storage and networking. So each VM server is running VMs. It's also running storage and it's also running networking. And then we've got a couple of gigabit switches that do things like lights out cards, the host-to-host traffic for management purposes, and the incoming internet. And then the majority of the storage and network in the VM to VM traffic all goes over the 10 gig interfaces. And in the end, you end up with something that looks a little bit like that. So a bunch of servers in a rack and lots of cables, and you try to make it as neat as possible and then you try not to break the fibers. And voila, you have a bunch of hardware that's out somewhere using a lot of power, not doing very much. So I've got all these things. What do I do with them now? Well, this is where it comes down to some smart software and start playing around with different bits of software. So the easiest place to start with is going to be VMs. So most clouds, their predominant unit of computing is a virtual machine. And one of the unsung heroes of virtualization is probably Libvert, which is an API to manage virtualization. So it allows us to work with multiple hypervisors. So it works with Zen, KVM, KMU, it even works with the VM wire, et cetera. But for this purpose is this. KVM definitely seems to be the predominant hypervisor at the moment, so I'm going to go with KVM. And thankfully, that's really easy to set up in OpenSUSE. So OpenSUSE, Leap, install the pattern, KVM-server, or KVM underscore server. And we have everything we need for KVM, Libvert, installed. So one of the reasons for using VMs as opposed to, say, going with alternative units of computer, say, a container, is that they've got a pretty mature technology now, and they've got very good security boundaries with all of the CPU extensions over the last 10, 15 years. The boundaries between the VMs is quite a lot stricter than, say, the boundaries between containers. So when we're running multiple tenants that we definitely don't want to be able to talk to each other and don't want to be a security risk to each other, they give us probably the best security guarantee at the moment. 
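The "install the pattern and you're done" part really is that short on Leap; a minimal sketch, where kvm_server is the pattern named in the talk and kvm_tools is an extra pattern that pulls in the management tooling.

```bash
# KVM, QEMU and libvirt in one go
sudo zypper in -t pattern kvm_server kvm_tools
sudo systemctl enable --now libvirtd
virsh list --all      # empty for now, but proves libvirtd is answering
```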
And it allows us to be very flexible, allows people to run whatever operating system they kind of want, essentially. And that allows me to run multiple different OSs in kind of my development stacks. So Libvert uses XML for its configuration, and it can get a bit long. This is as short as I could get it to fit on the slide. And essentially, all we have to do is really configure devices that we want to provide to the VM. We don't really have to know much about the underlying technology, how it's actually kind of running. Libvert will deal with that for us. So we basically tell Libvert its name and ID, how much memory we want to give it, how many CPUs we want to give it, a couple of options about how we deal with CPU feature sets, which will become more important later on. And then we kind of get into devices and what disks. And to start with, we're going to be storing our VM on local disks to get us running quickly. And what network interfaces we want, and different types of controllers, and then good old graphic stack, so VNC-based, and keyboard and mouse, et cetera, to allow us to actually get a shell onto the VMs at some point. And to start with, we can do things simple. So we're just using local storage, using what's called QM, QCal 2 storage. So that's storing the disk volume, so essentially the emulation of a hard drive as a file on your file system. But it's doing it in a way that it can grow as it needs to. So one of the problems you often have with running VM platforms where you've got the disks, they say you allocate your 20 gigabyte disk. You don't necessarily want to take 20 gigabytes out of your storage. So what you want to do is called thin provisioning. So QCal 2 gives you the easiest way to do that, running on Linux and Librevert. And then we're just going to connect it into a bridge that we've got to get access to the internet. So this is kind of as simple as we can do. We get a VM before running. And then we can use CinqledVertManager to connect to that VM and see it running and use the console. And LibrevertManager is pretty reasonable. It allows us to configure most things about the VM, see performance, control it, turn it on and off, add more devices and root devices. Because it also can talk over SSH, it's really easy for you to connect into your remote data center and manage your VMs. The other nice thing about Librevert is it's got complete API that you can use with bindings in multiple languages. So there's a Java developer that has Java bindings. And I actually have written my own little web UI that allows me to control my VMs as well. But what I really want to do with my VMs is be able to migrate them. And I want to be able to shuffle them around the hosts in my cluster. But I can't do that with local storage because I can't move the storage. So live migration is really great in that it will copy the memory of the VM over to the other host, pause the VM and then start the VM up at the other place. But it all relies on that you have shared storage across your VM cluster and that you can move the state of the VM over to that node and still have access to all the storage. So we've got a number of options to do that. We could say use NFS or ISCSI. But then that kind of violates my initial design of being homogeneous. And it's kind of a bit too simple to be honest. Or we could use some DRBD, director of application block device, which has actually some really nice clustering features now as of version 10 release. So it can do multiple node clustering and replication. 
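Going back to the simple local-disk setup described above, the same kind of definition can also be produced without hand-writing XML; sizes, the bridge name and the ISO path here are placeholders, and the last command is the migration step that only becomes useful once the storage is shared.

```bash
# Define a guest backed by a thin-provisioned qcow2 file on local disk
sudo virt-install --name demo-vm --memory 2048 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/demo-vm.qcow2,size=20,format=qcow2 \
  --network bridge=br0,model=virtio \
  --graphics vnc --cdrom /tmp/openSUSE-Leap.iso

# Thin provisioning in action: 20G virtual size, far less actually allocated
qemu-img info /var/lib/libvirt/images/demo-vm.qcow2

# Manage it remotely over SSH, as mentioned
virt-manager -c qemu+ssh://root@vmhost1/system

# Once the storage is shared (see below), moving a running VM is one command
virsh migrate --live demo-vm qemu+ssh://root@vmhost2/system
```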
I initially started off playing with something called Sheep Dog, which is a little kind of software-defined storage system that's mainly developed and used by I think it's NTT over in Japan. And that was actually really easy to get up and running. I've got pretty reasonable performance off it. And then I came to OpenSEAS conference back in 2015 or so, and I was chatting to somebody about Ceph. So I've been thinking about having a Ceph cluster for a while, and this was finally a chance to make that reality. So Ceph is a software-defined storage project that allows you to turn a bunch of disks a little bit of compute capacity into a kind of storage network. So Ceph basically provides all the smarts to turn those bunch of disks into a shared storage system. So we can access that storage from any node in our cluster. We can read and write the volumes of any node in our cluster. And we distribute that load across our entire cluster. So each of my servers has got about 15 hard drives, and all of those, well, 14 of those hard drives, are then partaking in this Ceph cluster. Ceph is then managing all of the data replication between all those devices and redundancy. So you provide Ceph with the rawest descent components, and Ceph will do all the smarts for you. And it's kind of relatively simple to get up and running, a bit of head scratching of places. So internally, Ceph is what's called an object store. So all it really understands is little chunks of data, so an object of, say, four megabytes of binary data. And that's then distributed around the cluster. So if you're storing a 20-gigabyte VM volume, you're going to chunk that into lots of little blocks and then scatter that around the cluster. And to do this, it uses what's called the crush algorithm. So it knows the state of all of the disks essentially in the cluster and how they're pulled together into what's called a placement group. And that's placement group is then assigned to a disk, and then there are backups. So Ceph will be able to then calculate from essentially the idea of the object where it's stored. So the whole point of Ceph's architecture is not to have any central points. So you don't have to talk to something to access your storage. You talk directly to the disks, which makes it kind of unique for some of the storage systems. A lot of them rely on you connect to essentially an orchestrator or distributor, which will then talk to the raw storage. This actually gives you direct access from the VM down to the individual disks, those chunks of data. And it also provides layered on top of all this a whole bunch of different interfaces. So there's something called RDB, the RADOS block device, which is what we're primarily going to use. So that is a way to provide raw block devices out of Ceph. So that's exactly what we want to consume in a VM. It's also got Ceph FS, which is a file system. So you can mount Ceph as a file system directly into your servers or into your hosts. It's also got a gateway that implements an S3-style storage API. So you can also have an object store over HTTP, like you would in Amazon. And all of this then backs onto the same storage cluster. So you end up with a very flexible system that allows you to provide access to storage in multiple ways. So one of the key things with Ceph is it's designed not to have a single point of failure and it's not designed for everything to be able to talk to everything, essentially. But something needs to coordinate all this. 
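Before moving on to the daemons, here is a tiny illustration of the object-plus-CRUSH model just described; pool and object names are arbitrary and assume a pool already exists on your cluster.

```bash
# Store a raw RADOS object, then ask CRUSH where it lives
echo "hello ceph" > hello.txt
rados -p rbd put greeting hello.txt
rados -p rbd ls
ceph osd map rbd greeting     # prints the placement group and the OSDs holding it
```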
So it has a daemon called the mon or the monitor nodes. So this is essentially Ceph's consensus system. So it talks an algorithm called Paxos between the three or more nodes. And they decide the state of your cluster. And then every other component asks those services for the state of the cluster and who to go talk to. So when you're deploying Ceph, this is the first place you start. So I used a tool called Ceph deploy, which is a bunch of Python scripts, essentially, that allow you to deploy Ceph. And you can literally zip a Reion Ceph deploy. And then tell Ceph to connect to a node, install Ceph, and then create a monitor. And then the monitors will talk to each other and you have the basis of your Ceph cluster. So before you can do everything else, you need to make sure you've got these monitors. You can one with one monitor, but then you've got a single point of failure. So because it's all consensus-based, you need a majority of nodes at any time for the cluster to work. So if you have three nodes, you can lose one monitor node. If you have five nodes, you can lose four nodes or five nodes. You can then use two or three different nodes. So you always need a majority of the cluster to be present for your storage system to work. So you've got to be careful and think about this because it depends on how you do maintenance because if you go and take all three of your mon nodes offline, you've then broken your entire cluster. And so once we've got our mon nodes all running and deployed, we can start adding some disks in. So as I mentioned, Ceph, basically, you provide it with a raw disk. So you don't do any fancy stuff like have a RAID array of all your disks and provide one array up to Ceph. You provide each disk individually because Ceph wants to see the raw volumes so that it can manage the replication across that cluster. So to add a disk into the cluster, we first off need to zap it, which it raises all day 12-net hard drive. And because we're using Ceph deploy, we're doing this remotely as well. So you have to make sure you get your device names right and don't say SDA, which is probably your operating system. So we've got a nice blank disk. We can then do prepare on it, which will set up the partitions that Ceph wants. And you could potentially then use, when I deployed Ceph, it essentially uses an XFS file system at every disk. But the newer releases of Ceph now supports a blue stall, which is a more optimized storage system, which I need to migrate to at some point. And then we can tell Ceph to actually add that disk into the cluster. So by activating it, we then see some data. We see some resource available in the cluster for us to use. And then we go through and do that across all of our disks, which takes a bit of time, to be honest. And as we're adding disks in or out of the cluster, Ceph will start rebalancing data around it if you've got data there. So when a disk fails, it will automatically start copying that data that was on there that's been backed up somewhere else. It will start backing that up again to always make sure you maintain a number of copies. And as you add more disks in, it will rebalance the data across the cluster so that you always try to use as many of the resources as possible. So now we've got some disks in the cluster, we can actually start using them. So in Ceph, we create pools. So pools have a number of placement groups, which is then spanned across the whole cluster. And this is when we set up replication options. 
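To recap the bootstrap just described as concrete commands: the host:disk syntax below matches the older ceph-deploy generation used in the talk; ceph-deploy 2.x later renamed these steps to a single "osd create", so adjust to whatever version you have.

```bash
zypper in ceph-deploy
ceph-deploy new mon1 mon2 mon3           # writes ceph.conf and the initial monmap
ceph-deploy install mon1 mon2 mon3 osd1
ceph-deploy mon create-initial

# Wipe a disk remotely and turn it into an OSD (triple-check the device name!)
ceph-deploy disk zap osd1:sdb
ceph-deploy osd prepare osd1:sdb
ceph-deploy osd activate osd1:sdb1
ceph -s                                   # watch the OSD join and the data rebalance
```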
So for the simple setup, we're going to create a RDB pool. So that's just the name. It doesn't actually relate to what it does. And we then specify the number of placement groups we want. And you have to kind of size this slightly magically based on the number of hard drives you think you've got or you might have and to how many bits of data you're going to just put on each hard drive. So I've got 96 hard drives, so it gives me about 100 or so chunks of data on each drive. And we're going to use replicated mode. So Ceph's got two ways to do data, kind of make sure it doesn't lose your data. The easiest option is called replicated. So it's essentially a bit like RAID 1. So every write to Ceph will get written to three devices or however many you configure. And then it will always maintain those number of copies of your data. There's more advanced options called erasure codes, which is essentially how RAID 5 works. So there are lots of very complicated polynomial maths that thankfully Ceph's can do very quickly these days. But it does much more require on that you have quite a lot more servers and quite a lot more disks than I've got for it to actually have any trade off of giving you more storage space for all the cost. When I originally started playing with Ceph, it didn't support thin provisioning on erasured volumes, on erasured pools. So you can't actually use erasure volumes for VM storage at the time. I think you can now with the Loomis. And once we create the pool, we're going to then set a couple of key options. So we're going to set the number of replicas we want of everything. So set size three tells Ceph that it all wants three copies of my data. And then we set a minimum size so that it will stop writes to the cluster if we ever lose more than so we can lose one disk out of the cluster essentially. Well, when we lose one backup, it will keep working. But if we lose two backups, it will then stop writes into that because it's now only got one copy of the data left. So that allows me to say potentially turn an entire machine off for maintenance purposes. And my data is still there and everything still works. If I turn two or three off, then it gets a bit more problematic. And then all of this is because we want Libvert to be able to talk to it. So we need to add a user for Libvert. So Ceph has an authentication protocol called Cephx, which is a little bit like Kerberos. And that essentially ensures that everything is authenticated when talking to all the different demons because your client is talking directly to the disk demons. So we create a user for Libvert. And then the next thing we want to do is get Libvert to talk to Ceph. So this requires us telling Libvert the credentials of Ceph. And in the glorious way that this works in Libvert, you have to define a secret definition which tells Libvert the type of secret you want to store. And it gives it a UUID. And then we actually tell Libvert the secret. So we define the secret and then we actually get the value of the token. So that's a base 64 encoded, 128 bit value or something. And then we have to insert that into Libvert. And we've got to do that on all of our VM servers. So I use Ansible to do that. So now we're in a position where we can actually get a VM to back onto Ceph. So based off our original example, we can then change our disk definition. So rather than using QCAL2 file, we're going to put it into Ceph. So first thing we want to do is put the actual file into Ceph. 
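Before copying the image in, the pool and cephx steps just described boil down to the commands below; the PG count and the capability string are examples rather than recommendations, and secret.xml stands for the libvirt secret definition described in the text.

```bash
# A replicated pool: three copies, writes stop once only one copy survives
ceph osd pool create rbd 4096 4096 replicated
ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2

# A cephx identity for libvirt/QEMU to use
ceph auth get-or-create client.libvirt mon 'allow r' osd 'allow rwx pool=rbd' \
    -o /etc/ceph/ceph.client.libvirt.keyring

# Register that key with libvirt on every VM host (secret.xml holds the
# <secret> definition with its UUID, as described above)
virsh secret-define secret.xml
virsh secret-set-value --secret <uuid-from-secret.xml> \
    --base64 "$(ceph auth get-key client.libvirt)"
```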
So thankfully the QMU image tool, which is part of QMU or the install of KVM and Libvert can write into Ceph and read out of Ceph. So we can convert the QCAL2 file we used originally and store it straight into Ceph. So we essentially copy all that data off the local disk and store it into our cluster. And then we can edit the definition of our VM to rather than using a local storage, we can use Ceph. So we have to tell Libvert the secret to use to talk to Ceph. And then we have to tell Libvert the monitor host to go and talk to. So first off, Ceph, the kind of Ceph client will go and talk to the monitors, get the state of the cluster before it can then go and talk to all the underlying disks. So when we configure, as I initially thought it was probably the list of all hosts, it's just the list of the monitors. So I've got five monitors listed and then the port number to talk on. And now I have a VM that's running on shared storage that I can now migrate around as long as I've configured my CPU types currently. So I then kind of decided one night that I should upgrade to open source 42.3. And maybe I'd have to feed you many beers at lunchtime or something. It sounds like a good and simple idea. So, you know, zip it up. Oh, great. I've got loads of errors in my Ceph cluster. That's actually because I've upgraded Ceph by accident. Usually I think it's advisable to plan upgrades a little bit more than that. But thankfully it all worked out really well. And I decided at that point I should probably go and read the manual to work out how to upgrade, which I was kind of lucky in that the first node I'd done this on was one of the monitor nodes. And you have to monitor nodes first. So that kind of all worked out quite nicely in the end when I upgraded the other five monitor nodes and then went around and upgraded all the storage nodes. And I was very lucky because there was a particular option you have to set for the latest release of Ceph. My cluster was new enough that that was a default. Otherwise I'd have probably been in trouble. And the only kind of thing that then caught me out was Ceph. It's the latest release, Alumnus, has introduced a manager daemon that gives you more information about the state of the cluster. And that requires more setup that I didn't know about in advance, but that was fine. And so this is a screenshot of the manager dashboard for my little cluster. There's not very much going on at the time I took it. And you can drill down and see all the different disk daemons you've got, whether they're in the cluster or out of the cluster, and all the pools and stuff in the pools. It's very good for out of the box. So now I've got my VM running on shared storage. I want to be able to bootstrap it somehow. So I'm going to be able to provide some metadata to it that will then configure the basics, such as network interfaces. So to do this, there's a project called CloudInit, which is used on most of the public, clouds, to the distributions, and open source, support system, and so on. And there's various ways to provide metadata into CloudInit. You can, say, use a file, basically a CD-ROM file system with some data in it and pass it up. However, I decided I wanted to use a proper metadata API. So I first started trying to implement the SWS metadata API, which is very complex and sprawling. And then found out the digitalization one is quite a lot simpler, so I'll do that one instead. And then found out that the data source for digitalization instead will only work on digitalization. 
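As an aside, the "CD-ROM file system with some data in it" route mentioned above is cloud-init's NoCloud seed; if you ever just want something quick without running a metadata service, it looks roughly like this (key and hostnames are placeholders).

```bash
# Minimal NoCloud seed: cloud-init looks for a volume labelled "cidata"
cat > meta-data <<'EOF'
instance-id: demo-vm-001
local-hostname: demo-vm
EOF
cat > user-data <<'EOF'
#cloud-config
ssh_authorized_keys:
  - ssh-ed25519 AAAA...example
EOF
genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data
# attach seed.iso to the guest as a CD-ROM and cloud-init picks it up on first boot
```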
So I ended up writing my own data source. And actually, that was really quick. In fact, that's the entire data source I wrote in Python on that slide. And that essentially then connects out through a network interface to a daemon running on each of my hosts. And then that daemon can look up in the art table to find the particular system scan tool to and provide all the metadata configuration for it. So now we've got some VMs running in Bootstrapped. We want them to sort together. So I want to do this as hard as possible. So I decided to use something called VPP, which is VET packet processing, which is a very high-performance user space networking stack. And that essentially gives you a full layer 3 router in software. It's primarily written by Cisco. And we can connect our VMs into it using something called V host user interfaces. So this allows our VMs to talk without transitioning into the kernel, to talk to VPP, to send networking packets. And we can easily create a socket and add it into a bridge in VPP. And then on the other side, update, liberate, talk to that. So using a particular type of networking adapter. The only difference here is that we have to make sure our VM is backed into huge pages, which is slightly problematic, but not too much of a problem. And once we've got VPP talking to our VMs, we need to talk between hosts. So ideally, you'd use something called DPDK, which allows you to use the real interface, but basically doesn't work if you've not got an Intel NIC. And lots of other BIOS settings tweaked. So I used the host interfaces, so talk through the Linux networking interfaces using something called AF packet. The only complexity here is you need to make sure they're in promiscuous mode, because VPP is managing all the MAC addresses and IP addresses. And now we've got all our hosts talking to each other. We want to overlay networks so that we've got a VM on host A and a VM on host B, and they want to talk to each other. So we're going to use something called VXLan for that. And we're going to create tunnels from each VM node to each VM node for each tenant that we've got in our networking stack. Now the problem here is that gets quite a time consuming to do manually. And VPP has a nice feature where if you exit the process, it loses all of its state and it has no configuration. So actually what you now need to do is write a whole daemon to manage VPP. So I've started on bits of that, but we'll see where it goes. And my big question at the moment is then how do I do all the routing, which I haven't worked out. I've partly worked out how to solve, but not quite yet. And to wrap it all up, was this a smart idea? I don't know. I should have just used open stack to be honest, but I wouldn't have had fun learning all this in the time and sharing it with you. So at that note, any questions? None? Right. Okay. So my final thoughts are actually our VMs and the cloud worth it at all with things like Kubernetes. So Kubic is doing great work on building a Kubernetes platform. And actually, is Kubernetes getting good enough security barriers that you don't need all of this complexity? That's kind of an interesting thought. Is all this a waste of time? Anyway, thank you very much. Okay, thank you. And I'll see you soon when we get to Ponzi's newicked apps. Alright. This is not going to take long.
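Going back to the VPP plumbing described a bit earlier, a rough vppctl session for one host might look like the following; the exact CLI spellings drift between VPP releases, and the interface names and addresses are examples, so treat this purely as orientation.

```bash
# Uplink: take over the Linux device via AF_PACKET and give it the underlay address
ip link set bond0 promisc on
vppctl create host-interface name bond0
vppctl set interface state host-bond0 up
vppctl set interface ip address host-bond0 10.10.10.11/24

# One VXLAN tunnel per tenant towards each peer host
vppctl create vxlan tunnel src 10.10.10.11 dst 10.10.10.12 vni 100

# VM-facing vhost-user port, then stitch VM port and tunnel into a bridge domain
vppctl create vhost-user socket /var/run/vpp/vm1.sock
vppctl set interface state VirtualEthernet0/0/0 up
vppctl set interface l2 bridge VirtualEthernet0/0/0 10
vppctl set interface l2 bridge vxlan_tunnel0 10
```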
Driven by curiosity and some late night ebay purchases, I ended up down the rabbit hole of building a cloud from scratch: why use OpenStack when you can do it the hard way? This was a great excuse to dive into the various subsystems required to assemble a cloud and to find out how frustrating aspects of it could be. A cloud is a jigsaw, requiring many different pieces to fit together and co-operate. This talk will take a look at a number of Open Source technologies and how they fit into this puzzle: First you need a way to run Virtual Machines; this is probably the easiest part of the jigsaw. Next you need a way to store and distribute your Virtual Machine volumes. Ceph fits in here nicely. Then you need a way to connect all your Virtual Machines together. You could just use the Linux networking stack, or you could look at VPP, an exciting userspace networking stack born out of Cisco. Great, now your VMs can talk to each other, but how do you do that first-boot configuration? Well, hello cloud-init. Finally you need a way to push traffic to your VMs: enter HAProxy.
10.5446/54515 (DOI)
All right, we're going to create a complete Tor Onion service with Docker and openSUSA in less than 15 minutes. Unfortunately, my presentation is a little longer than 15 minutes, but not much more, and I promise to keep my slide deck short. But first of all, what is Tor? Tor is a free software and an open network that helps you defend against traffic analysis, which is a form of network surveillance that threatens personal freedom and privacy, confidential business activities and relationships, and state security. How does Tor work? Tor works basically like this. So Alice's Tor network obtains a list of Tor nodes from the server directory. Her Tor client, which is normally the Tor browser, picks up a random path to the destination server. Green links are encrypted, red links are in the clear. So anything inside of the Tor network is encrypted. Doesn't matter if they're using HTTPS, HTTP, doesn't matter the protocol, it's encrypted. But once you leave the Tor network, then it's normal traffic. If at any time the user visits another site, Alice's Tor client selects a secondary random path. Again, the green links are encrypted, the red links are not. What isn't Tor? Now, if you read anything about InfoSec or about the security community, you're going to hear that Tor is a bad place. What isn't Tor? Tor isn't the dark web. Tor is a security and privacy network. It is used by normal people, journalists, activists, bloggers, law enforcement, business executives, militaries and IT professionals. I'm not going to go into this long spiel about each one of these, but I did leave a link here so you can actually see more about how different groups of people are using Tor. But it has problems. It's got bad neighbors. There are lots and lots of bad websites on the Tor network. We know what those are. Those are drug markets. Those are crime markets. Those are really bad. There are really bad things. And because of this, it's got a bad reputation. I would not do this presentation at Suzicon because in a corporate environment, people aren't going to want to hear about Tor because it sounds bad. It's got a bad reputation. It's edgy. It's also a little slow. Because of the encryption factor within Tor, where everything is encrypted at least three times to get from you to the outside internet via Tor, you're going to have lag. And there's no way around that. But we can help change the face of Tor network. We can encourage security and privacy advocates and users to harness the power of the network. We can encourage nonprofits to mirror their websites on Tor. We need good neighbors. And of course, you can build your own onion service. If you've got a blog or you're thinking about doing a blog, maybe put it on Tor or maybe mirror it on Tor. If you've got a nonprofit, especially a nonprofit that deals with information such as whistleblowers, mirror your website on Tor. Make sure that both you and your users get that extra level of security and privacy. How do containers fit in all this? And how are onion services made? Well, the old way, and this is not using containers, Bob wants to build an onion service. This is what we call the Tor website. He's got a website already, but wants to be available on Tor. This is to protect the privacy of his users. He installs a Tor service, Zipper and Tor. He's a good Linux user and uses OpenSusa. He edits his Tor RFC file and tells it to listen on port 80. He starts the service. He gets a new host name and anybody can go to that website via the Tor browser. 
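The "old way" just described is only a few lines on openSUSE; the hidden-service directory below is just a conventional path, any directory the tor user can write to works.

```bash
sudo zypper in tor
cat <<'EOF' | sudo tee -a /etc/tor/torrc
HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 80 127.0.0.1:80
EOF
sudo systemctl enable --now tor

# Tor generates the keys and the .onion hostname on first start
sudo cat /var/lib/tor/hidden_service/hostname
```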
The new way, Gecko wants to build an onion service, but he's very protective of his privacy. He's got ideas and information that he would like to share with the world, but he doesn't necessarily want people to know who he is for whatever reason. He creates a web container and attaches it to the Tor container, which is what we're going to do today. He never opens port 80, 443 or any other port locally. So if you do an in-map on his server, you're not going to see those ports open. They are not open. Apache thinks it's open and Apache will bring in traffic on those ports, but they're not really open. The Tor client or Tor server, Tor service will actually bring in that traffic and make Apache think it's coming in on port 80, but it's not really. He finds his new onion host name on the container. We're going to use a little script to do that for us, but it's going to be there. Accessing onion services. Alice here is about Bob and Gecko's websites. She installs the Tor browser. She's a good open-susi user. Zipper and Tor browser launcher. Anybody with a laptop here who is not, doesn't have Tor already installed, go ahead and do that because we're going to be doing some live exercises. It's not the fastest thing to download, but it's out there. Of course, it's free. She puts in the onion URL like any other. Neither Bob nor Gecko ever see who she is or any other information that she doesn't want to give. Matter of fact, if their website is just plain HTML files, no scripts, no JavaScript, nothing else, just plain HTML, there's nothing that he can see about her. There's nothing she can see about him. She doesn't know who Gecko is because he doesn't explicitly say. There's no easy way to find out because Gecko is not an idiot. Gecko doesn't go around on Reddit saying, hey, I've got this website. He just has it out there anonymously so that way he can give information that he wants to share. She appreciates that she can also view Bob's website securely and without fear of being monitored. What are Docker containers? We're going to get into the container part now. A container image is a lightweight, standalone, executable package of a piece of software that includes everything needed to run it. Code, runtime, system tools, system libraries and settings. How does this help us? It's easy to run several engine services at once. You don't have to know how to set up the individual pieces. You don't have to be an expert in MySQL. You don't have to be an expert in Apache or Internet. You focus on your content and not on your administration. Wouldn't a VM work also? Sure. There's nothing wrong with running a VM. However, if you don't want to research all of the running pieces or if you're new or if you're highly constrained, for example, you've only got a VM and you don't have the resources to create another VM inside of it, which is not usually a good idea, containers make more sense. Some comparisons between virtual machines and containers, isolation, portability, what they contain and the speed. I'm not going to go through all of this. The big thing to point out is that containers run a little faster. They have some limitations. VMs require more overhead. They run a little slower, but they also contain a full operating system and not just a piece of an operating system needed to run an application. So our demonstration. Today we're going to use three containers. The first one will be a web server running Apache, a web server running Apache on WordPress. 
The second will be a MySQL database and the third will be running Tor. For demonstration, I'll be using Docker images to create the containers and the Docker compose command to set up everything quickly and easily. Using these steps, you can replicate this example website on your own. The dependencies, all you have to do is zip around these commands, these applications, if you don't already have them already, the dependencies will take care of everything else. And here's my Git file, and I'm going to go ahead and make this smaller. I'm going to go ahead and get this in myself. Copy. By the way, is this big enough for everybody or do I need to make it bigger? Good? Great. So we've just done that part. Make this big again. Let's look at this Docker compose file that we just pulled down. One of the first talks about, even though in the previous slide I said it was going to be the Apache then MySQL then Tor, and the actual file I've written is Tor first because it's easier. So let's look at the Tor service. This Tor service is running off of an image called Goldy slash Tor in service latest. Goldy is actually pretty good in that his images are up to date, and he generally keeps the latest version of Tor in his image. This links to the WordPress image in that the WordPress image contains Apache, which is going to be listening on port 80, even though it's not really. It's going to save its hidden service keys into a local directory in the Tor directory. It's pseudo looking for ports on port 80, even though it's not really. The database image is running MariaDB. In this image we can predefined our variables. These are really crappy passwords I know. MySQL root, the name of the database, the user and the password of the user, and again we're going to save that configuration to a local file, so if we do crash this container, we can just bring it right back up and not lose any data. And then the WordPress container. We're using the standard generic WordPress image from hub.docker.com. We're going to link this to the database because the WordPress needs the database in order to work off of. The environment variables have to match what are here. So we know that the database name, user and password are here, and we have the same thing here. The database host, by the way, it knows that the name of the database is DB, so it's looking for DB on port 3306. This is, in this case it actually is a real port, but it's inside of Docker. It's never seen on the outside. We're not going to publish this on the outside, it's just inside of the containers. And then we just Docker compose up, minus D. Minus D will run this as a daemon, not live, so oh, yes. They are not. I do have a lamp stack image that I wrote myself, but I haven't gone through all of the work of rewriting all of them for OpenSusa. The Tor one is based off Alpine, which is the smallest distro right now for container images. The WordPress and MySQL are based off of Debian. I've rewritten the MySQL one for less ones, but I haven't rewritten it for OpenSusa. So we're going to start the containers. Docker compose up, minus D. And that's it. Docker PS. We have three containers running. They've been up for a few seconds. These are brand new, like as of now. So I've got a little command I'm going to run, little script, Docker exec. And then the OpenSusa Tor, a con Tor, Tor one container, and then onions. Onions is a little script that gives us the host name. And there's our host name. 
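The actual compose file lives in the speaker's Git repo; what follows is a from-memory reconstruction of the three services just walked through, not that file. The Tor image name is my best reading of the one named in the talk, and how it discovers the linked web container, plus the mounted paths, depend on the image version, so check the respective READMEs.

```bash
cat > docker-compose.yml <<'EOF'
version: "2"
services:
  tor:
    image: goldy/tor-hidden-service:latest
    links:
      - wordpress
    volumes:
      - ./tor:/var/lib/tor/hidden_service    # keep the .onion keys across restarts

  db:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: changeme
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: changeme
    volumes:
      - ./db:/var/lib/mysql                  # keep the data if the container dies

  wordpress:
    image: wordpress
    links:
      - db
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: changeme
    # note: no "ports:" section, nothing is ever published on the host
EOF

docker-compose up -d
docker exec -ti "$(docker ps -qf name=tor)" onions   # print the generated .onion address
```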
So anybody running the Tor browser should be able to open this and probably hack me. Not the fastest application. Or you can blame it on my laptop either way. And you might think, well, this is a VM. You're connecting from your browser to the VM. Big deal. No. I'm going from this browser to Germany to Ireland, Netherlands to my VM. Each of those hops has an extra layer of encryption. This is what's cool about this. This VM has a 192.168 IP. I don't have access to this building's router. I can't do any kind of port forwarding. What's happened is that the Tor service went out to the Tor network and said, hey, I'm here. Here's my host name. And this is what I'm listening on. The Tor browser went out and said, hey, I've got this host name. Can you send me there? And it does. It does not use real IP. This is actually running through UDP. So that's just a simple website. A simple WordPress installation, not the most interesting or secure setup. But it works. It works quickly. That's the power of containers. I didn't have to go here and set up a VM for you. And then I could use Salt. I could use Salt or whatever to set up everything for you really quickly. But in this case, setting up the container worked. And it worked well. But that's not all. At my home laptop, I'm running a Go for server on Tor. Why? Because. You want to run Tomcat? Again, very simple configuration. And we can look at that file. It's very similar. I am, Dr. Compose. Again, we're using the Goldie image for Tor. And we're using Tomcat, which is based off of OpenSusix. I wrote this. So you can run Tomcat. But don't just take my word for it. I know Tomcat normally runs on 8080, but I make it easy. It helps if I get the right pass IP or right host name. So we're running Tomcat over Tor. It's just a matter of bringing up the image and it works. And that's pretty much it. I promise to keep this short and sweet. And we did set up the entire website, the entire WordPress website in less than 15 minutes. If you want to play around with Tor, this is an easy way to do it, especially if you're concerned about security. Any questions? Yes? You're right. The best way to do it is to use a hardened web service like EngineX. Worker does have some hardened web services out there that you can look into. But it's really up to you. You need to do your own due diligence on that. Anybody else? Excuse me? There is. There has been some experimenting with Darker Swarm, but I haven't gone into it that much. I'm hoping this time next year I'll do one on how to do the same thing on Kubernetes. So we get through load balancing, through distribution. Yes? Excuse me? No. I have not advertised the physical location at all. Look for the fact that you know that I'm here and I'm doing it. But no, you need to be a government-level analyst to be able to find the physical location. What happened in the case of one of the dark net markets, the person who was running that gave a lot of clues about himself. If he had just stayed quiet about who he was, he probably wouldn't have been caught because it's very, very difficult to backtrace someone running a Tor service. The way they normally do that is either they do just normal detective work or there are ways such as finding a vulnerability in the Tor browser, hacking that, and then that gives them away. At least that's what the government has told us so far. There might be other ways, but that's what they've released so far. Anybody else? Okay. We're going to wrap it up then. 
I hope everybody has a great rest of the conference. Thank you very much. Thank you.
In a way, both Docker and Tor are shrouded in mystery. Containers have been the biggest thing in the IT field in the past few years, and yet a lot of people don't know what they are good for. Why not just use a VM? Likewise, Tor is known only for its negative uses and connotations, such as "The Dark Web" and "The Deep Web", while not many people know about the actual positives of the technology when it comes to secure communication and privacy. My presentation is a short primer on both of these technologies, followed by a 15-minute demonstration of how to create a WordPress website, MariaDB database, and Tor entry point with Docker on openSUSE Leap 15, all of which can be recreated on any hardware or VM, even without an external IP, and accessed anywhere in the world using the Tor Browser. There should also be time for Q&A at the end. My docker-compose files, notes, and presentation will then be available on GitHub.
10.5446/54516 (DOI)
Good morning. Let's get started. My name is Andreas Verber. I'm a project manager for the architecture at SUSE Labs. Right now I'm going to give you an update on the cross compiler tool chains for OpenSUSE. So first of all, what is this about? This is not about systems that are actually running OpenSUSE, at least not in general, but rather about small microcontroller systems that don't have a whole lot of RAM and code storage. But at some point you want to develop software to deploy on such microcontrollers. You want to get the code that you have developed onto such microcontroller boards. And once you have that, maybe there are problems and you need to debug them. So I'm going to run through all of those stages more or less. Two years ago at OpenSUSE conference in Nuremberg, I had presented the first real cross compiler work. So we had some ice cream cross compilers before that could be used for developing kernels for systems running OpenSUSE so that you could take an x86 system and develop kernels for S390 originally or also for power and later ARM. And now it was also possible to develop with a standard C library code for non-OpenSUSE targets. The first one was the epiphany target. This was used for the paralleler board and a crowd-funded ARM board which has an FPGA and via the FPGA it connects to this code processor chip. And then the other one, the second one was for RX, RENASAX Extreme, because there was the sacro board and also some other boards that it was possible to actually deploy the code to. So just for completeness, what existed as well as I mentioned, there's also cross compilers for OpenSUSE in particular for OpenSUSE kernels, not really for OpenSUSE applications. There's the client compiler tool chain as part of the LLVM package which provides several targets depending on how it's being configured for us. But at least for x8664 we build pretty much almost everything we can. There's also a tool called SCC which is its own compiler tool chain and is being used for microcontrollers that have less than 32 bits. So for example, STM8 which is 8-bit microcontroller and the 8051 architecture as well among others. So moving on to what is actually new this year. So Richie has been working on avoiding the need to specify the cross compilers by the exact version name. So the way that they are built, they are built as part of the GCC7, GCC8 and so on packages and that means that the binary that gets generated will in the end be something dash GCC dash 8 for example. But normally make files when you build OpenSUSE packages will assume that you simply have a CC or GCC compiler with a cross compiler prefix. So this would require either patching make files or at least overriding like a handful of different variables in order to be able to use such a tool chain. Using the alternatives mechanism we now get some links from GCC to GCC7 or GCC8 whatever is has been installed on the latest or configured by the user. Which in turn allows us to just use this. Do we have any laser pointer here? No. So we can just use this. Well, you can see it anyway. So this cross compile variable should in most cases now be the only one that you use and foo dash would be the prefix that is being generated for the specific tool chain. So that would be rx-elf dash would be the cross compile prefix and then GCC, LD and so on all the tools. 
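To make the alternatives-based naming concrete, here is roughly what using one of these cross toolchains looks like from a shell. It is a sketch under assumptions: the package name cross-rx-gcc8 is inferred from the GCC 8 based naming scheme described above, and rx-elf- is the prefix the speaker gives as an example.

```
# Install the packaged cross toolchain (package name assumed from the naming scheme above)
sudo zypper install cross-rx-gcc8

# The update-alternatives links give you unversioned tool names ...
rx-elf-gcc --version        # resolves to rx-elf-gcc-8, or whatever version is configured

# ... so most firmware Makefiles only need the one CROSS_COMPILE prefix,
# from which gcc, ld, objcopy and the rest are derived:
make CROSS_COMPILE=rx-elf-
```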
A second development was that after a phase of announcement we moved the new lip package from base system where it was alongside G-Lip C and user lip C into the developed GCC package which had the advantage that we can better stage changes because then we have GCC and new lip in the same place in particular when we add new tool chains. Otherwise we always have a, well we still have a cyclic dependency between the packages but at least there we can test it before it all goes to factory. And while the remaining problem with that is that we always need to take care to submit both the GCC package and the new lip package or whatever C library is being used for the respective factory. There's also AVR lip C for example, possibly further ones. They always need to be submitted together and this hasn't always worked out that sometimes we had unresolvable GCC cross packages in factory. What is new for my site this year is that for several months now we have a new ARM cross compiler tool chain which is able to develop code for use in either firmware or microcontrollers. So that would be the Cortex M class of processors as well as the Cortex A class where we don't need anything specific. This was originally driven by the Spectre and Meltdown security vulnerabilities that I will be going into in the next presentation slot. We needed to update a software package from version 1.4 to 1.5 and suddenly it grew a dependency on not just compiling code for either 64-bit ARM code or 32-bit ARM code but it needed both in the same package. So as a solution we would be building it on AR64 using the native GCC there but also using this new cross compiler in order to build parts of that code that would be reused in that. The exact list of where you can use this compiler is probably much longer than what I have listed there. I'm not going to read all of this. What's noted in brackets there is the various ways that you can actually get the code onto the board. The CST microelectronics has their own ST link mechanism which is a USB adapter that several tools exist for getting the code onto the board. Then there's the CMS-STAP standard that was developed by ARM which is also USB-dased with two different tools available to get that on there and then there's the J-link as well as in some cases you have systems with heterogeneous cores where you have both Cortex A and Cortex M cores and then you can just boot into the Linux system and use certain commands to put code in place to be executed by the microcontroller cores or real-time cores part of that system. What is relatively new still is that we not only have a port of OpenSuser running on the RISC-5 architecture but rather that we also have a cross compiler in order to develop code for the initial set of RISC-5 microcontroller boards. In particular here the Hi-5.1 was quite known in the press. I have to admit that I have not yet tested this on the board actually because we don't yet have our packages set up in order to actually get the code onto the board. Has anyone in the audience maybe experimented with that already? No one, okay. Oh no, this is what you are thinking of is the higher and unleashed which is like 999 but that's the one that can actually run Linux. This is a small one that's just Arduino form factor and can run only microcontroller code, no Linux. 
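As an illustration of the "arm-none" use case, a bare-metal Cortex-M build typically looks like the sketch below. The arm-none-eabi- prefix, the linker script name and the source files are assumptions for illustration; the exact flags depend on the chip.

```
# Compile and link a freestanding Cortex-M4 firmware (no OS, no default startup files)
arm-none-eabi-gcc -mcpu=cortex-m4 -mthumb -O2 -ffreestanding \
    -nostartfiles -T stm32f4.ld startup.s blinky.c -o blinky.elf

# Convert the ELF into a raw binary that the flashing tools discussed later can write to the chip
arm-none-eabi-objcopy -O binary blinky.elf blinky.bin
```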
Not taking credit for this myself, there were several people that have also been working on integrating AVR tool chains that were previously in a separate cross-toolchain project into this new set of GCC packages. Richard Beiner has been integrating that one of the tool chain maintainers and there's lots and lots of boards in particular the original Arduino boards and several clones of that that can be used with. There's also tools like AVRDude that can be used to get that via a serial connection onto the boards. Moving on to some stuff that is not yet in factory. We've been in contact with a company called Andes in order to make packages available for their proprietary microcontroller architecture. So they're also working on RISC-5 and this is the previous generation of boards that they've come up with. And another one is FT32 which is formerly from FTDI, the company that makes the USB UR adapter chips by now it's called Bridgetech and there are also some low-cost boards available that the code can actually be used with. However, yet again, we don't yet have packaged the tools to actually get the code onto those boards. One topic. Whenever we build cross-compiled code, so that means like the new lip packages that would actually execute not on your local system but on the microcontroller board or as part of some other core on your Linux board, whenever the OBS and RPM scripts run in order to extract the debug info symbols into a separate package, then that has led to binaries for foreign architectures breaking. I'm not entirely sure why that is but we've needed to always explicitly disable the stripping and extraction of those debug symbols from the packages. Would be interesting to find out why that is and whether we maybe can fix that in a central place instead of in every package. So if you want to build some general purpose library or some firmware, then you would need to add at least two lines to your spake file to suppress this functionality. The second one is that in theory we could sit down and build, I don't know, maybe 20 or something cross-compiler tool chains. But for one that would take quite long to build whenever the GCCT team checks in a new revision of the compiler or maybe some patch in OBS. And for another, originally we had packaged the cross-compiles for a number of probably months but we figured out that the installation that certain binaries were getting installed to was not the one where at runtime that we're being expected. So we had built successfully compilers but they were not fully working at runtime in order to find like certain CRT.o files. And basically what we're at the moment still lacking is some package and we're still discussing how exactly to do that. Maybe one of you had suggestions for how to do that for just compiling like a small hello world example to make sure that the compiler tool chain is in itself consistent. So it would not be so much about does it compile code that is actually working on a specific CPU. There's other test suites for that but rather just for validating that if we build a GCC function and a GCCA tool chain for particular architecture that each one of those actually works as expected. And finally, one topic that I mentioned yesterday in the package hub talk is that it would also be cool if we could make some of those cross-compilers available not just for open ZUSA in the OBS but also for the commercial ZUSA Max Enterprise family of products. 
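For reference, the "two lines in the spec file" that keep the RPM debug-info extraction and stripping away from foreign-architecture binaries are typically macros along these lines. These are the commonly used RPM macros for the purpose; the exact pair used in the openSUSE cross and firmware packages may differ.

```
# Do not generate a -debuginfo subpackage ...
%global debug_package %{nil}
# ... and skip the post-build scripts that would strip the cross-built binaries
%global __os_install_post %{nil}
```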
And there are certain rules at the moment that stand in the way of this in that obviously ZUSA is already shipping GCC compilers for compilation of SLEE code. And as such we cannot just submit the GCC 7, GCC 8 packages that those cross-compilers are now part of into package hub because that would conflict with the packages that ZUSA is providing. So at this point we have cross-compiler tool chains, we're able to turn source code that we've written ourselves into code to be run on such microcontrollers. Now how do we actually get that onto the boards? My preferred solution for that is a package called OpenOCD short for on-chip debugger. Unfortunately the development of the package has not stalled but the releases are currently quite for I guess more than a year, there's been no release but there is active development going on with the Garrett review system and changes going into the project are getting reviewed quite rigorously usually. So my proposal would be that instead of sticking with the 0.10 release that is currently out there for tumbleweed it should be okay if we would actually switch to get snapshots simply because then we could have like support for more chip sets every few weeks or months whenever something new comes out. The problem with that is that there are dependencies that OpenOCD has at one library for interfacing with those J-Link used B adapters. It also uses TCL runtime so there may be points in time where the snapshot of OpenOCD may also require a snapshot of say LibJLink so that would be a trade-off to make. There's another tool packaged most recently called PyOCD. It was originally just a Python library for interfacing with embed boards that are based on this CMSTAP standard. More recently it has also grown some tools that can be run with a command line with a lot of arguments for just starting a GDB server and then via GDB you can get your code onto the board. And finally the latest addition from my side was the ESP tool package. So this is for expressive ESP32, ESP8266 and so on boards based on the extensor architecture. Unfortunately the tool chain for building the code is not yet fully upstream so that we cannot really put that into factory yet but I'm in touch with them about hopefully getting that done in the future. So some closing remarks. There was the question already about risk five so slightly related to that. If you have a board that does not have an MMU but has sufficient RAM at least on ARM and a few other architectures it is possible to run not just microcontroller firmware code but also an embedded Linux not provided by OpenZoozer but using our tools it can easily be built from the Linux sources. There are various ways to go about that so the most frequent case is to use the UcLibCNG and what I have been working with so far is the flat tool chain which means that you build elf binaries and then you convert them to a special flat format. ST has also proposed a new ABI called FDpick. This has been existing for like Blackfin for example for quite some time already. The proposal now is to do such a tool chain for ARM as well. It has been there's a proof of concept out there but it is not yet merged in the upstream MGCC project. This would allow to reuse libraries between executables even without having their own virtual address spaces. And some of the examples that I have tested this on has been the SDM32F4, FM4 originally from Fujitsu and the XMC 4500. 
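As a concrete example of getting firmware onto a board, an OpenOCD invocation usually looks like the sketch below. The interface and target config names are assumptions; pick the ones matching your probe and chip from OpenOCD's scripts directory. The pyocd-gdbserver entry point shown for the CMSIS-DAP case reflects how the tool was invoked around that time.

```
# Flash and verify an image via an ST-Link probe (config file names are examples)
openocd -f interface/stlink-v2.cfg -f target/stm32f4x.cfg \
        -c "program blinky.elf verify reset exit"

# Or, for CMSIS-DAP boards, start a GDB server with pyOCD and load through GDB:
pyocd-gdbserver                                            # listens on :3333 by default
arm-none-eabi-gdb blinky.elf -ex "target remote :3333" -ex load
```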
Then another remark going slightly beyond microcontrollers FPGA field programmable gate arrays is a way that you can not just develop software but hardware based on software descriptions. So you can configure OR and gates and you have local memory in there that you can also have local storage on and using this one thing you can do is you can implement soft core processors. So you can actually have an FPGA chip and emulate in theory well an ARM system or a risk five is at the moment quite popular because for a long time there were no physical boards that you could run the code on which then opens a whole lot of range of use cases that people might have like Xilinx has those microblaze soft cores, NIOS is another one from another vendor, OpenRisk has seen a few uses and who knows what other cores there are or maybe in the future so that will be something to keep an eye on whether there is any demand for that. And the cool thing is that out of all those families well usually if you go to Xilinx, Lattice, microSemi then you know they all have their proprietary tool chains in order to generate that code from a standard VHDL or very long description but for ISE 40 a few years back someone has actually started reverse engineering the format needed for that and there are now open source tools in order to develop for this family of admittedly slightly smaller FPGAs but still it is a very interesting start. With that I am done, are there any for the questions in the audience? Andrew, shall we get you a microphone? Not working, the The package is all in factory for all architectures so that if you wanted to cross compile on ARM, you could, or on x86 or power or whatever. So it's all available in factory now. Yes, so there are no restrictions as to the code working and in factory ARM you will find obviously the corresponding packages for ARM hosts. What is I think not enabled is the development project. Deval.gc does not have all architectures enabled. So they're not building the full architecture. In some cases where I've said, you know, for example, the prime use case of cross-developing epiphany is on ARM v7. So that's like one tool train that we have specifically enabled there to build, but I don't think that all of them are available there. But if you take a look at the ones in factory, then yes, there is no restriction on building them that I should be aware of. So yes, we have been using, as I mentioned, the cross ARM none tool chain on ART 64. So even on ARM you can cross compile four other ARM systems. That would be the other thing to look out for because we would not build ART 64 cross compilers on ART 64. That's the restriction that we have. So if the name is different of the architectures that we're building them for, then that should work. Any last question? I didn't hear you mention the GCC-AVR and the typical workflow where you use AVR due to program to at ML 8-bit microcontrollers. Is this something that you work on and test and have packages for as well? I personally don't. I just know that the package exists. I'm not sure. It could be either in hardware electronics or in cross-tool chain AVR. But personally, out of all the architectures I work with, I don't happen to actually have an AVR-based board, so I have not hit it myself yet. I've had Arduino builds working doing that, but I can't remember if I took Arduino Studio from within OBS or if I just took Arduino Studio from upstream. But it is possible to do without too much effort. Okay. I guess we'll have to finish here. 
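For completeness, the reverse-engineered open-source flow for the Lattice iCE40 family mentioned earlier in this talk looks roughly as follows. Tool choice and file names are illustrative; nextpnr is shown here, and arachne-pnr was the earlier place-and-route option.

```
yosys -p "synth_ice40 -json top.json" top.v                          # synthesis
nextpnr-ice40 --hx8k --json top.json --pcf top.pcf --asc top.asc     # place and route
icepack top.asc top.bin                                              # pack the bitstream
iceprog top.bin                                                      # upload to the board over USB
```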
Thank you very much for your time. If you want to hear more about what we've done with the ARM cross-compiler toolchain, stay tuned for the next talk. Thank you very much.
A few years ago we started adding cross-compiler packages to Tumbleweed, based on our maintained GCC packages. There have been two recent toolchain additions, and more are still in the works, but several challenges remain on our end - such as making the toolchains available for Leap and PackageHub.
10.5446/54518 (DOI)
So, welcome to DevOps at GNOME. I'm Carlos Soriano. I work at Red Hat, basically as maintainer of Nautilus. But to be honest, I spend a lot of time as well more on the organizational level of GNOME, as part of the board of directors and these kinds of things. Recently, I have been the point of contact with GitLab, because GNOME is switching to GitLab, and with our partnership. I'm also interested quite a bit in the whole experience of developers and users in GNOME. So I have been working on kind of like a vision with Flatpak and GitLab to try to make a full DevOps experience for GNOME, because before, we have been quite bad at keeping up with that. So the overview is going to be: the first part, which is "welcome to the panel", which was what we were doing before: how we were building applications on GNOME, the stability and buildability that we had on GNOME, reproducibility, the planning that we have, the interaction with design, QA and users, and there is some fun missing there, anyway, the feedback cycle that we have. Then, for the tools that are fixing this, which are Flatpak and GitLab, we will see just the basics of GitLab and GitLab CI and then the Flatpak basics. And then, finally, the most interesting part, I think, is when you merge both of them, then you create a full DevOps experience with CI and Flatpak: Flatpak and reproducibility, bundles, continuous delivery and the full cycle. So, how we were building GNOME before. Have you ever tried to build an application in GNOME before? Yeah? Yeah, I can imagine. So we were using something called JHBuild. JHBuild is just like a script, a big script that has some prefix for installing applications and some environment variables. That's it. That means that half of the things are on the host and then half of the things we were building from master. Okay? And there is no versioning. So, to give an example, to build Nautilus, it was 80 modules building from master, and it was taking between four hours and eight hours from scratch. That's a lot for an application in GNOME. And you can imagine, when you are building 80 modules from master and some of them are not even controlled by GNOME, it's going to break. It's going to break. So what were we doing for fixing this problem before? Good luck. Literally nothing. We had nothing before. And this is really about experience. You can imagine, for new contributors and for developers, even for people like distributors and designers, this is insane. So for reproducibility, basically we had different environments for developers, designers, QA, users, because developers were using maybe a very updated distro like Fedora, things like this, the latest Fedora, but maybe designers are not. They are using maybe openSUSE or whatever, the same for QA and users. So the problem is that everyone was in its own environment, which makes things quite difficult. I need to keep time. Let me, okay. Because it's my first short talk. I usually do long talks, so I don't want to keep very long. So yeah, I think most of us have seen with the users that when they come to us and try to file a report, we say, oh, it works for me, right? This is very typical. And this is because they are using different environments, right? And here there is something interesting. This was the guide we had for newcomers. And can you see here, I will show you directly here. Can you see it here, in big? It is strongly recommended to use Fedora 25. That at that point was the latest.
And that's it. We only supported Fedora. Very politically correct, right? For now. And I remember Dimstar. I don't know his, okay, you. Yeah. You were not pleased with this. And you were like, come on. This is not motivating. And I agree. This was not a good experience. And it's really bad. But we had no other choice. And it happened that now we have the choice. Then also, this goes inside the experience, not in Flatpak, but still, we were doing project planning just in the wiki. It was just a table with links to bugs. Nobody was updating it. There is no integration. And you couldn't query things like the whole short-term vision for normal or long-term vision or things like this. Which is also very bad for distributors like Open Choose. So, ideally, what we will have for the interaction between designers, QA and users is that, for example, designers have mocaps and they iterate on them. And they should be able to try working progress, like in a branch or something. But they are designers. So you cannot make them build, not to lose, AT modules that probably they are going to fail anyway. They also usually want to see different versions. For example, the development version alongside the system installation. For designers, this is very good because they can see the difference here, if it works, if it doesn't work, and they can iterate on the feedback. And, of course, they are either liter or non-technical. So you cannot give them just a script and good luck. And this is the kind of thing we have before, which is Baxilla. So they attach some image or something. There is no inline support for images. There is no way to try out new things in here, not even the implementation for the signs they do. So, yeah, it was quite bad. So the problem is that everyone was following the same path as the developers. And that's not ideal, right? Ideally, we have early feedback from designers, from users, from QA, that they can try these things just with one click. Everything visual, no command line, and that this path is optimized for users QA, respectively, or designers. Okay, let's go to the second part. So this is what we have until now. Quite bad, I think. To be honest, I cannot imagine now how we have lived in there for so long. I don't know. I don't know how we did it. But now I will explain the tools that solve these issues. Flapac and GitLab. So just the very basics of GitLab is a tool that was made from scratch for a type of experience. So that means everything is integrated. What we have before is Baxilla, Cigit, and Github. And everything is integrated on the same tool. It's very similar to GitHub, if you know GitHub. But it's a bit more powerful and it's free software. As I said, everything is integrated. The whole thing from idea to design to implementation, to continuous integration, QA, continuous delivery, and again, the full cycle of the post. So that's very good for us. And it has support for non-technical teams. And this one is very important because nothing makes me more happy than to see this. Which is right now in the normal GitLab, we have all our teams using the same tool, design, engagement. You want to put in Fedora, translation, developer portal. Even the board of directors are using this tool. So everyone is in the same. You can use labels and things like this. The UI is quite nice. But okay. The most important part and the technical, let's go to the technical part again. Is the CI is similar to Travis, if you know it. It has pym lines. 
For example, you can have a build, test, deployment, review. There are a few of them. You can have artifacts which is a way to put from the container to the public to probably see something. And there are schedules. So for example, you can do something like every Sunday, I will deploy my application to the users. So they will use an update. And you don't have to do anything else. So let's see how is the CI that we do. No, sorry. This is just a small example. So basically how the GitLab CI works is that you choose a Docker CI image. And then you have the stitches. For example, test. And you just run a script. For example, a Google, which is a generator of static websites. And then you deploy in the pages stage doing the same. But then you put some artifacts, which is the public folder from the website. And that's it. It's very simple. This generates a website, a static website in some link. So I think it's quite nice. It's quite powerful, the GitLab CI. And now Flap Pack. How much of you are know about Flap Pack? You know? Okay, most of you, yeah. So probably you know already the basics. But I will do very fast this one. Basically it uses container technologies like OS 3. It's unboxed. That means it's a box by default. You cannot opt out of that. You can punch holes or make use of portals, which is similar to Android intent. But yeah, it's a sandbox. You cannot just remove the sandbox system. It has a consistent environment, which I guess you can start imagining how this fix things that we have talked in before. Because everyone is going to use the same environment. It doesn't depend on the host. It's like a container, right? So it doesn't touch the user installation. It's also version. So you can have SDKs like Android, for example. So you target a specific version and it's forward compatible. So even if you are using a very new distro with a very old application, that's going to keep working. There is nothing that's going to break it. And because of all of that is cross-distribution. So finally with this, we can say to Dimstar that he doesn't have to worry about new comers to know because now they can use open source and contribute to know freely. So just a very basic on how you create a flatback manifest, which is what defines your application. I will show you now a Naughty Lose flatback. It's very simple. You have the first section, this one. Can you see it there? Yeah, it's fine. The first section is just describing the application name. Then the target is using master and the SDK, which is no. There are Cadi, there are Electron, there are others. And we put some tags. And then we describe the actual application, the dependencies that are not in the SDK. Naughty Lose has actually not many. It's Xif, EXif, which is a wrapper of the first one, Tracker, Nomad to R, and Naughty Lose. And that's it. This is how you build Naughty Lose. It's as simple as that. You do flatback, builder, build, this. Done. And how thin of this is that we have go from the four hours, between four hours, eight hours, six hours before, to six minutes. Now anyone can go to a non-builder, for example. You open a non-builder, usually Naughty Lose. In six minutes, you have Naughty Lose there running. And it's going to work because it's using this environment that is isolated from the host. I think that's pretty good. That's a really big change from what we have before. I'm very happy that we have that. 
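A hedged reconstruction of what such a Flatpak manifest looks like is shown below. The app id, URLs and module list are assumptions pieced together from the description above (exiv2, gexiv2, Tracker, gnome-autoar, and then the application itself), not a copy of the real Nautilus manifest, and real modules would carry extra build options.

```json
{
  "app-id": "org.gnome.NautilusDevel",
  "runtime": "org.gnome.Platform",
  "runtime-version": "master",
  "sdk": "org.gnome.Sdk",
  "command": "nautilus",
  "tags": ["devel", "nightly"],
  "modules": [
    { "name": "exiv2",        "sources": [{ "type": "git", "url": "https://github.com/Exiv2/exiv2.git" }] },
    { "name": "gexiv2",       "sources": [{ "type": "git", "url": "https://gitlab.gnome.org/GNOME/gexiv2.git" }] },
    { "name": "tracker",      "sources": [{ "type": "git", "url": "https://gitlab.gnome.org/GNOME/tracker.git" }] },
    { "name": "gnome-autoar", "sources": [{ "type": "git", "url": "https://gitlab.gnome.org/GNOME/gnome-autoar.git" }] },
    { "name": "nautilus",     "sources": [{ "type": "git", "url": "https://gitlab.gnome.org/GNOME/nautilus.git" }] }
  ]
}
```

Building it is then a single command, roughly `flatpak-builder --force-clean build-dir org.gnome.NautilusDevel.json`, which is the "six minutes" build mentioned above.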
So, again, since it's reproducible as well and the environment is consistent, we don't have any issues that we talked before, like versioning or breaking because some other module master is broken or something like this. And now, okay, I have five minutes. Yeah, it's enough, I think. The last part, I think is the most interesting is when you put Flatpak and GILab together, basically using the CI because the CI is the year, the thing that connects Flatpak with GILab in a way to create this DevOps cycle. So let's see how we do CI for Flatpak and Naughty Lose. Well, Naughty Lose and the whole of it. So it's quite easy. We have an image that we created. It's just, I think, a very, very minimal Fedora image which has the Flatpak SDK installed. So if you have the SDK installed, you don't need to download it every time. We have some variables just for building. And then what we do is Flatpak Builder and we say to stop on the Naughty Lose module. So we build all the dependencies. Then, yeah, we stop. Then we build the actual Naughty Lose. We have to do this because we are doing this in branches. So, for example, when you create a merge request on Naughty Lose, now you will have the CI triggering and then it will build whatever is in there. To do that, you have to build whatever the GILab CI is downloading there, not the actual upstream Naughty Lose, right? So we stop on Naughty Lose, then we build what GILab CI has inside. But everything is done inside the Flatpak environment. Then we install and then we run the test. And finally, we create a bundle that we will speak about this later. But it's quite simple. Then we have some artifacts which is what we show to the wall and it's basically the bundle that we will see later. What is that? Some logs and we say that it aspires after 30 days. And then we have some cache. So every build is going to keep this cache. So basically how it looks like is that, for example, you go to pipelines. Pipelines. And here you can see all the CI for every branch that we have. So you create a merge request and the CI is triggered. That's quite good because now what do we have? We have premarch, build test and runtime test. We no longer have the issue about just putting to master something and now breaks and nobody can build no more and things like this because we had these issues before. Now everything that goes to master is passed through the CI. And since the environment is the same as the developers are using, if it's passed the CI, it's going to be available for any developer as well. So we already fix those issues that we told before. And as I said before, it's quite fast from four hours to three minutes. Now the second part, which for me is the most interesting and I think is where actually Flappack makes a difference here. Flappack together with GitHub is bundles. So with Flappack, you can create a containerized bundle, like an application you can download, like in Mathintos. And then with that, you can install that and run it. So what we do, for example, here, this is a mer request. I created a branch in Naughty Luce. I created just for this talk. I modified, I will show to you here. Can you see the so hidden files here? I modified this label and I put something in there. I create the branch. I create the mer request. And now here the CI triggers and creates the test, makes the test, the build, and then creates this bundle. 
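Putting the pieces together, the pipeline described above corresponds roughly to a .gitlab-ci.yml like the following. This is a hedged reconstruction, not the exact file Nautilus uses: the image name, manifest path, bundle name and module name are assumptions.

```yaml
# .gitlab-ci.yml -- sketch of the Flatpak CI job described in the talk
image: registry.gitlab.gnome.org/gnome/gnome-runtime-images/gnome:master   # image with the SDK preinstalled (assumed name)

variables:
  BUNDLE: "nautilus-dev.flatpak"
  MANIFEST: "build-aux/flatpak/org.gnome.NautilusDevel.json"

flatpak:
  stage: test
  script:
    # Build all the dependencies from the manifest, but stop before the app itself ...
    - flatpak-builder --stop-at=nautilus app ${MANIFEST}
    # ... then build the checked-out merge-request code inside the Flatpak environment
    - flatpak build app meson --prefix=/app _build
    - flatpak build app ninja -C _build install
    - flatpak build app ninja -C _build test
    # Finish the build, export it to a local repo and wrap it up as a one-file bundle
    - flatpak-builder --finish-only --repo=repo app ${MANIFEST}
    - flatpak build-bundle repo ${BUNDLE} org.gnome.NautilusDevel
  artifacts:
    paths:
      - ${BUNDLE}
      - _build/meson-logs/
    expire_in: 30 days
  cache:
    paths:
      - .flatpak-builder/
```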
And using gilab review apps, which is something for making deployments, we go to the continuous delivery, which is exposing the Flappack bundle and is in this link. So this is just a regular mer request. So we click here. It's downloading here. Double click, Naughty Luce the Flappack. Install. Launch. Done. This is Naughty Luce master. This is Naughty Luce from there, from the mer request. And you can see it here. Hello, open source conference. This is amazing. Now the signers doesn't need to build anything. They can't just go there and install whatever we have done. We solved the problem. We finally have, like, a divorce period together, you know. And... So finally, what we have here is that we generate installable bundle per mer request. We have parallel installation. So the signers can see... So you can see the system installation on the left, the developer installation on the right. And they can make difference between them. They can provide any, you know, feedback that they have. And now the last thing is how this goes together. Now you have more or less the big overview. But basically, I will show you a real example. Recently, we had... Well, now we fix also, we still have this short and vision, a long-term vision of NOM. Now we are using these epic labels, stretch labels, to say, like, what's the short vision of NOM, the big task. And one of them is the action bar. We make a proposal of... I don't have water. Okay. Anyway. To make an action bar. So thanks to the Flatpak and GILAP, we had around six designers that we never seen before in NOM. Just random users, random designers from the world that came just to help us with the feedback. Because they could install this quite easily. So we propose something. Finally, we have... Oh, thank you very much. Finally, we have inline support for images. And then with what they did here is that with the designs, the designer put some mockup. And then I create a web branch. I create... Sorry. Web request. The designer clicks install. No, I don't like it. Let's do another mockup. You can see here that there are a lot of mockups, a lot of designers, a lot of user-providing feedback on mockups. So he was doing another mockup. I created a new request. And again, and again, and again, and the iteration was so easy. Because they just have to install anything. And then finally, the last one, this one, and we matched it. And that's it. And it's the first time that really the designers have been happy to, hey, I can't just go here and try things early on the cycle. So yeah, the last one, the pops. We achieve it. And this is for GNOME. But actually you can do it for any application. And I think it's very helpful to have this whole cycle of Flap app plus GitHub. Questions? Okay. Thank you very much. Thank you.
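On the reviewer or designer side, installing and running the bundle attached to a merge request comes down to two commands; the file name and app id follow the sketches above and are assumptions, not the exact artifact names.

```
flatpak install --user nautilus-dev.flatpak   # installs the single-file bundle for this user only
                                              # (the matching org.gnome.Platform//master runtime must be installed)
flatpak run org.gnome.NautilusDevel           # runs side by side with the system Nautilus
```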
As you probably know, GNOME hasn't been the most up to date in the technologies and processes used for the design, development, testing, QA and delivery loop. To be honest, we have been quite behind! Build failures, failing tests, contributors stuck on trivial details, each product with different release days, designers and QA needing to build the whole stack to try out a minimal UI change... well, we could continue indefinitely. Needless to say, this had a huge impact on our performance and contributor friendliness, even more so in a time when web applications are so common. Fortunately, things have changed dramatically over the last two years, especially with Flatpak for a containerized-style build and distribution of apps and our move to GitLab and its integrated CI, and we are able to fully dive into integrating a more DevOps-oriented workflow. This effort has become a dream come true for GNOME, something we would never have imagined a few years back. In this talk I will present and explain in detail how to use and integrate Flatpak and GitLab together to create the future of the DevOps experience for Linux application development, how we use it at GNOME, and what impact it is making on our organization.
10.5446/54521 (DOI)
Yeah, good morning everybody. Welcome to the second day of the Open Suzer conference. I hope you had a good evening yesterday at the release party. I mean, some of us had a couple of beer together, I remember. I'm trying to make my talk today without a larger damages or accidents. Yeah, happy to be back here again and that our talk was accepted. I want to give you some insights about the new health community, especially with relationship to new health on Open Suzer. So first of all, let me quickly introduce new health. What is it really? It is less a technical project. It is not a medical project, at least not in the primary medical project. It is more a social project. Why is it social? Because it addresses and works around the issues that are appearing, especially in lower developed countries, where most of the diseases don't have a medical background, but they have a social background. We'll have another talk after this one that deals a little bit more with the details of new health. So I don't want to jump too much into that, but I want to give you at least an overview about the functional areas. So when I'm saying it deals with the social aspect, that means in new health, we're looking at an individual, not only when it becomes a patient, but we're looking at him already before that. So the individual itself and its social environment and the neighborhood, this is addressed with the personal and community parts of new health. Of course, when he comes into a hospital, we need to take care about him. We need an electronic medical record. We need to have evaluation results. We need to have laboratory results, x-rays and something like that. To be able to do so, we have to put him into a hospital. A hospital needs some stuff around like management of beds and rooms, management of pharmacy and so on. And in the end, the fourth area that new health deals with is the health authorities, the Ministry of Health, MOH. So we have a couple of information for them as well. And in the examples later on, we will see examples from basically very small implementation up to very large scale implementations. New health itself is free software. That means it gives us the four freedoms as designed or defined by the free software foundation. The basis technically is Python. The database background, back end is Postgres in the future with the upcoming so-called Federation server. We will use as well the MongoDB. We have Lever Office for an output. The tried in the ERP framework is the ERP back end on which new health is built. Lever Office, as said, is for the output. New PG, we're using for example for the digital signature of prescriptions or birth or death certificates or something like that. And the good free software system is running on the leading free systems like FreeBSD and New Linux. So let's take a look at the community. Who is in the end of the day community? I've divided this into three parts. First of all, those guys who are coding the software were building the foundation. And those ones were picking up the foundation and making it usable. And then those group of users were in the end of the day using this. So I want to start with the first part of the foundation. That's basically the easy one. You remember the free software slide or the slide where you have the running one? So basically we have stuff like Python, Postgres, Flask, Unicorn, MongoDB. These are the base systems where we are running on. We are not really modifying them. 
Every now and then we have issues that we report back into the community and then they're dealing with it. The part on top of it, the new health module itself, have been mostly coded by Luis Freitran. And I'm very happy that I could move him over here for the second talk about new health. So Luis is from the education. First, he studied informatics. And after that, he studied medicine. So we have a very rare case here that we really have a person that knows what is needed and who is able to put this into life. I mean, most of you have probably already done some software projects and it's always an issue to get the requirement from the users. It's not only that they have their language. You have to get to know the language, what are they really talking about, what is the stock requirement situation, something like that. But then you have to translate it and here we have the case that Luis can move. So Luis will give the next talk. And as said, I'm really happy that he came over here. So now let's take a look at the larger part. This is the part of the users. So most free software projects have an issue. Unlike legacy software, unfree software, you do not need to register. You don't have to pay any license fees. You don't have to sign contracts. And as a good practice, free software doesn't phone home usually. That means there is no need to contact a license or registration server or whatever. So if somebody says, well, but our legacy installation of what do I know system has some 300,000 downloads, then we can imagine that this is a maximum of 300,000 users because each user has to register, each system has to register. But on the free software world, you can make a download and create 20 installations out of it. Nobody knows. So and this is the problem that we're having here as well. We don't know our users, at least for the most part. And this is the big issue that, as I said, most free software projects have. So if you read about the share of Linux on a desktop, don't believe in it. It cannot be right. So the easy part is the users in the academic area. So we have a couple of academic corporations, for example, with the United Nations University in Kuala Lumpur or with the University of Entre Rios in Argentina. And these institutions, new health is mostly used for education purposes. So they are bringing the nurses and the assistants and everything in contact with an ERP software that's used for medical purposes. They're using it partly themselves. And in the future, that is at least a target, we will probably get some bachelor or master thesis on this, where we get hopefully an extension of the functionality of new health. We have one new academic partnership just recently signed. And this is in South Africa. In South Africa is a non-government organization based in Spain. And it was founded around 2011 by health professionals. And they saw the need of solidarity and cooperation. And their main operation area is the African sub-Sahara area. So they're giving training for health professionals. They're supplying sanitary equipment to different centers in this zone. And they're working on the improvement of the health infrastructure and the humanitarian aid. See, this is the difficult word after a couple of beers. So if you have questions up to here, feel free to raise your hand. I'm always happy to answer them. So now let's take a look at the users, where we know we have an implementation. 
So as you can see here, we have a strong presence in South America, where this is coming from, I will explain later. Here in the Caribbean, Jamaica is a place. Then we have here Gambia in Central Africa. We have a couple of implementations. Let's move over to here. This red spot is by the way Germany. I will tell a little bit more about that later. Pakistan, Laos, the Philippines, Japan, and I have here too in yellow, where we know that there is a project upcoming, respectively, where the project is already ongoing. But it's not live yet. I'm aware that this picture is pretty much incomplete, but it's really based only on these implementations, where we know that they're working on it. So as I said, we have a strong presence in South America. This is based due to the history of this project. Luis was living for a couple of years in Buenos Aires, and there he started the new health project. So one of the first implementations was for the Argentina public health system in the province of Entre Rios. There is a city called Sigui, and there is the Lista Hospital based. The hospital is a fairly small hospital with only a couple of beds. It has laboratory. It has emergency. It has around 25 doctors. And it's one of the eight plus installations that we're having in Entre Rios. The mixture here is different. We have primary care and day care institutions. That means that people are not staying overnight, outpatients. And we're having as well hospitals in there. The result after the implementation, I think, was quite impressive. 88% of the users say they use it always. Most of the users said that the system is easing their work. And three-quarter of them said, yes, it is contributing and it is improving the quality of patient care. In Argentina are a couple of more implementations as well, but I wanted to take this as an example because it was also everything implemented by a local company. That means they are already training their people, which gives an additional positive effect on the ecosystem. Then we have an implementation in Germany. And this is quite interesting because Germany was never the target market for new health. So there is a network of volunteers. It's called the Flasterstube, which offers medical care for homeless people, for people without health insurance, refugees, and whatever. So once a month, this network creates a meal where you can get advice, for example, on debt problems, alcoholism, drug problems, where they're offering something like a haircut, for example, for the people. And where some people also ask for medical advice. So out of the 40 to 80 visitors for each session, about 10 to 15 are asking for medical care. This Flasterstube went live last year. They were coming from OpenEMR and they are quite happy with the setup of it. The whole budget that they had for the implementation was zero, that means they're running it all on their own. So the priest who runs this place is an old free software enthusiastic, very deep into Debian. And he is using this and he did the implementation basically on his own. As this Flasterstube is working a little bit in a gray area and they are dealing with patients who don't have a healthcare, they are not obliged to contact the social security and the so-called Krankenkasse, which are the legal insurances in Germany. Because if you would do so, you would need to have a certification for the software. Otherwise, you're not allowed to run this in a practice or in a hospital or somewhere else in Germany. I don't know. 
So if somebody sees the need for Germany to pick this up, feel free to support us on the certification process. Same as by the way for Austria. And we have a quite active community member here in Austria in Vienna and he's working on a certification process for Austria. So this is small scale implementation, slightly larger. We have a medical center in Gambia. They have around 14 beds, 40 professionals. Gambia, as you can see, is a quite small country inside Africa. They had first implemented New Health 2.8 that was done by an external IT company, but that was not really coming into proper work unless they got some own IT stuff in there. Somehow they were not really happy with this outsourcing company and they decided to take the implementation into their own hands. So they upgraded first to version 2, 3.0 and later to 3.2. 3.2 is by the way the actual version of New Health. And they migrated to OpenSUSA that the database conversion, because with each version of New Health there are slight changes in there, but database conversion is described quite well. So in between they have it running. They are very happy with it and they are stating that the system is even more responsive under OpenSUSA than it was before under EU. What was the name of the distribution with EU? I don't know, forgot it. So here as well, the budget that they had was minimal and most of the implementation and the upgrade process later on to the current version was done with community help. Let's stay in Africa, Cameroon, the Bafia district hospital. This is actually one of three projects that our GNU Solidario team member, Armand, is doing. So this is quite a large thing already. It has 170 beds, 110 doctors and about 50 to 100 patients per day. So they had around a three month preparation phase and then a two month implementation phase, by the way, on OpenSUSA as well. So the interesting here is the project itself was not run under the flag of New Health. It was run under the flag of the WHO, the World Health Organization. So it was labeled an WHO project together with the Ministry of Health in Cameroon. And I think this is a very successful implementation as well as it will be a kind of template for further rollouts within Cameroon. They did the implementation based on business processes and not so much on functionality which allowed them basically to do this whole implementation process in a very short time frame. I'm not aware of any larger customizing activities that they had to do. So they used it basically as stock as it was supplied, adapted a little bit the headers of the print layout and something like that and it was it. As I said, they're using OpenSUSA for it. This is by the way Amant here with a part of the nursery stuff. Nevertheless, they had a couple of challenges. First of all, for us, it's pretty unbelievable as we are online 24 seven. But in these areas of the world, we have regular power cuts. Not only power cuts in the city, but as well in the hospital. And if we talk about an internet connection, we talk about the old 38,000 modems or something like that. So infrastructure in technical terms is quite bad. The computer literacy is also quite low. That means they had to take a steep learning curve to bring the stuff onto the level where they could really work properly with the system. And then there is another point. An ERP system provides a full and transparency. So you have an overview about the stock, you have an overview about the cash collection and everything. 
And this transparency is not always welcomed by everybody. So the word behind this is corruption, which we unfortunately see in this area of the world quite a lot. Not only on that level, but especially if you look up the levels a little bit into the government direction. Different here, as I said, it was a project of the World Health Organization to guest with a Ministry of Health in Cameroon. And I'm quite sure we will see some more of this in the near future. So let's move a little bit further to East. We have an even larger scale implementation in between in Islamabad, Pakistan, the Yagbarniyazi Teaching Hospital with 500 beds, 150 professionals, around 250 patients per day. And here as well the budget was zero. They did it completely with internal stuff. The gentleman who was leading this implementation, we know him in between because the community gave also a lot of help to them when they had additional questions in how to customize this and that. But here as well, the in-house team gave the opportunity or due to this, had the opportunity to build up knowledge which is beneficial for the whole area. So these were some examples for single hospitals. Let's have a look at multi-site uses, for example, by the Red Cross in Mexico. The Red Cross in Mexico uses this in three locations in the province of Bocadal de Dio. 24 consultations rooms, 36 beds, ambulances, professional, 800 patients per day. When the project started, they had similar issues as you find them nearly everywhere. So the accounting system does not really comply. No system for inventory control. That means that was probably all paper based until you found out, oh, the penicillin for the next surgery is not available anymore. Too bad. Medical records of patients were still on paper. So that was difficult to find and difficult to keep it updated. The hospitalization needs, that is the word for me today, hospitalization, does not really comply to all the needs and they had a quite well system for emergency calls, but it was completely disconnected from the rest of the world. So after the implementation of new health, they basically got it all under one roof. So first, of course, the medical records, they had an integrated emergency system, by the way, this ambulance system in new health was developed especially for this project and later on then extended into the core of new health. They have cash collection payments, they have the whole accounting, the pharmacy, they can manage the purchasing process for all kinds of products they're using in the hospitals with new health. So this is also a very successful implementation and I think it influenced as well the state of Morales in Mexico which is choosing new health to be their public health system. Even larger scale implementation is Jamaica. New health is the public health system in Jamaica and it's implemented or it should finally be implemented in above 350 places and even more. In Jamaica was also the concept of the personal user ID born. That means each patient get a very individual record, a very individual number which describes his individual very unique. That is needed to have the master data synchronized across all locations in the country. The master data keeping is done centrally here in a central server and from there it can be updated and distributed to all the health facilities. That is working but the learning from this was mainly influencing the further development of new health. 
So the so called Federation concept which is being programmed at the moment for the next release and where Luis will tell a little bit about it is basically originated here with the difficulties that we found out and where we'll have an improvement with the Federation server. So let's go into the other side of the world, into Asia. Anybody of you been in Laos? So Laos is an Asian country. It's a territory country, it has no collection to the sea and it's mainly surrounded by Thailand in the east and Vietnam in the west. So the implementation there was for the CMR, for the center for medical rehabilitation in Laos. Oops, what's that? Sorry, I clicked on somewhere here. So this is a slightly different representation of Laos. It is a heat map of bomb drops during the Vietnam War. The Vietnam War was between 64 and 73. Probably only a few here in the room can remember that from their own history but maybe you heard of it in the history lessons. So during that time, although Laos was never officially part of this war, they suffered from about 580,000 sorties, which means bomb drops over the area of Laos. More than 2 million tons of bombs were dropped about two-thirds of the Laos territory. And this had the side effect that not only many villages were destroyed and hundreds of thousands Laos civilians were displaced during that time, but as well 30% of the bombs that have been dropped remained there on as so-called UXOs, unexploded ordinances. And this stuff lying there is more than the unexploded ordinances of the World War II bombing in global. So I think from these figures you can get an imagination how Laos was suffering from this Vietnam War. So at the moment there are around 50 casualties per year where something is happening. About 60% of the accidents are deadly immediately. Around 40% of the affected people are children. And over the last 25 years, only 1% of the unexploded ordinances have been removed. As I said at the moment we have about 50 casualties. Some 10 years ago there were some 300 each year, so the number, the amount is dropping, but nevertheless there are still lots of casualties where people lose legs, hands, arms, legs, arms or whatever. And the center of medical rehabilitation in Laos takes care about these victims. In between they've broadened their scope and they're also dealing with standard, let's say, surgeries and examinations. So the center of medical rehabilitation, the implementation there was also a very, let's say, clean one and it could be used basically as a role model. The same local company that did the implementation in CMR did also the implementation for the Maosort hospital, which is a quite old building in Vien Tanne. With 600 beds is also quite a size. And they finished off where they started with the CMR implementation, for example, the complete translation to the Laos language. You can see it here a little bit unreadable for me. And writing from right to left, I think. So these were some of the existing implementation. And now I'm quite happy to announce a new implementation, which is already ongoing, of the all India Institute of Medical Science, located in Delhi. So after some evaluation, they have chosen new health as their ERP system. And now, as you can see here, we're talking about completely different figures. You all in the institute is the largest hospital in South Asia. It has around three and a half thousand beds and about 3.5 million examinations per year. That means we have roughly 1000 examinations per day. 
And this is a completely new size, which will put our project into challenges as well. I think what makes us quite happy here to read is that they have chosen open SUSE as technical background. So it would be great if we can get in touch with some SUSE engineers in terms of setting this up, load balancing and something like that. Yeah, then we have a big white spot. There are probably more white spots, but this red spot here is actually a white spot. This is China. We could see on the translation server that it was new health was translated into traditional Chinese in about three weeks. Incomplete! And we have, I don't know, 80,000 words or something like that. Roughly, yeah, thanks. But we've never heard of any implementation that has taken place in China. On the other hand, who would do this amount of work in translation if you're not using afterwards? So my guess is there is something going on, but we don't know. White spot. Yeah, let's come to the last part of the presentation and these are basically the makers. So while I was doing this presentation, I stumbled over graphics like this. I don't know. I think one of my WhatsApp contacts or some sent me this. It says, how do you choose an operating system? First question is, do you fear technology? Okay, I'm asking myself, an operating system is something quite technical. And if I fear technology, this is similar to how do you choose the engine of your car if you have no idea what an engine is? But nevertheless, so let's say that if you fear technology or you have no real relation and for every problem, flat tire, you call the road service and your daddy is rich enough, you get the Apple road service. If your daddy is not rich enough, you probably get the Chromebook road service. On the other hand, if you say, well, technology is a nice thing and I would like to deal a little bit with it, but I have my privacy concerns. So if you don't care about privacy, you're using this system with the four things. I think we know what it is. And here's an interesting part. I don't know whether you noticed you have probably because yesterday, a new law in the European Union came to life, the GDPR. And it's quite interesting because Windows 10 is one of the systems that is heavily phoning home. They are sending a huge amount of telemetry data over to some servers in the US and you cannot switch it off completely. You have the option in the enterprise models to set some switches where most of the telemetry data is not being transferred any longer to whatever servers. But the Bavarian data protection authority for the private sector did an investigation on that and they found out even if you switch off everything, there is still data, encrypted data being transferred to servers outside of the EU. They asked Microsoft for a statement on that and Microsoft didn't respond so far. So to my understanding, under the light of the new GDPR, the users of Windows 10 is illegal in Europe. Can maybe somebody of you call up the major of Munich and tell him? I think he will be happy to hear. So as we are all quite concerned about privacy and security, the question is just, do we have a life outside the Linux world or not? Yes, we have one and that is why we are going for the Gecko. But as I said before, we are calling the Apple Road service. Many of our users are really technology agnostics. If you have a doctor somewhere in a place abroad, he probably can switch on his computer, but that's about it. 
So the challenge here is that we turn the source code somehow into a usable system. And here as well, think about the slide that we had before, the software stack that we are building: the base operating system, the Python modules and so on. The work that has been done here is what allows those users to run a GNU Health system later on. And everybody of you that stepped up yesterday during the talk of Ludwig and said, hey, who has provided a package? Who did some testing? Guys, the makers – it's you, the openSUSE community. Without your work, it wouldn't be possible to have a system with a dashboard-like installation. If we look at the history of GNU Health on openSUSE, that started about two years ago, also at the openSUSE Conference in Nuremberg, where we had GNU Health already on the build service, but it was not yet fully integrated and not yet shipped with the standard distribution. So two years ago, the work was started to introduce this into Factory and into the Leap distribution. That was done. Last year, we made another step forward: GNU Health on openSUSE has its own test suite in openQA. That means for every Leap distribution and every Tumbleweed snapshot that is being built, the GNU Health installation is tested with openQA. That was also a result of last year's openSUSE Conference – stand up, Oliver. Thanks to him, he basically did the implementation. So another thing where I feel that the openSUSE platform is quite unique is this little thing here: GNU Health runs fully on a Raspberry Pi. It has an 8 GB SD card in it and a little ARM processor, but due to the fact that we have multi-platform support here, we could implement GNU Health there. That was not really difficult to be honest, because the main work to get the system up and running and to have a graphical user interface there was already done by the openSUSE community, and putting the GNU Health packages on top of that was basically a piece of cake. And Raspberry Pis with GNU Health can be used in many areas. For example, we can use it as an interface to laboratory devices. We could use it in domiciliary vector control. We could use it as an autonomous GNU Health system which communicates with a Federation server. Keep this in mind – Luis will tell you more about that. So that means GNU Health on openSUSE is successful because it has your support. We have really great support from the openSUSE community. So as said, we ship it in the standard distributions. By the way, this box is currently running on Tumbleweed. We have it tested. We have the option for a one-click install, with a cookbook for the setup. We have cross-platform support. And not to forget, SUSE is a sponsor of the GNU Health conference, this one here, which will take place in November again. Nevertheless – that's the sad side of the story – we've also lost a friend. Anybody remember who that was? Oh yeah, Christian, of course you do. Dister, that was the mascot of SUSE Studio. And SUSE Studio allowed us to set up a live system with GNU Health, with a demo database, that you could start from a CD or start in a virtual machine; you fire it up and you immediately have a running system that you can use for testing or for education purposes. Studio was switched off last year, I think, and Studio Light was introduced. Studio Light is running on the build service, but the focus is slightly different.
For example, on Studio we could have a defined snapshot of packages, whereas on OBS, similar to the standard procedure, once a package changes it recalculates and rebuilds the whole ISO image. Additionally, the behavior is slightly different. Up to now I didn't really find the time to set up a new live CD. So if you're planning to help out somehow, maybe in building a live CD, feel free, give me a call, I will be happy to introduce you to it. So if you're looking for some more cool stuff to work on, here are a few areas. Tryton: we're currently maintaining the Tryton versions on openSUSE, it's on the build service, and I'm looking for a new maintainer for the Tryton packages. Tryton packages means, per release, we have about 120 to 140 packages to maintain, and I'm looking for a successor who could take over the work. If you're more into user interface stuff, you could, for example, step up and work on the GNU Health PDA application. That's a cell phone application that allows users to maintain their personal medical records, share data with doctors, exchange data with health centers, and so on. The GNU Health server as well as the Raspberry Pi have many points where they could connect to other devices, so interfacing work here could be quite helpful. If you feel that GNU Health can be beneficial for your country, feel free to contribute, for example, translations or something like that, or improve the documentation on Wikibooks. So there are a couple of areas where we could use some help. And with this, I will come to the last two slides to bring up a little summary. And I found out this is quite similar to what Richard presented yesterday in his presentation, 'those who do, decide' – although we have a slightly different focus. To my understanding, the usage of free software should not only be a take, it should be a give as well. So just taking the software, taking the packages, installing and then hiding away is probably not best practice. And this is the sad part of the survey that we've done with the GNU Health community on the user side: we really got very little feedback. And this is quite sad. And if somebody sees the recording out there, please file your information, we're still happy to hear it. So what makes a great community? First of all, get involved, do something, share your experience. Every implementation is a learning; every next implementation can benefit from the learnings that we had before. Stay friendly and be helpful – I think these are two very basic things that we need to consider. Richard mentioned that yesterday in the discussion: how do we deal with it if we have a conflict? To summarize this in a single slide, a community always has to do with humans. Treat them with respect and be friendly, otherwise things get strange. So with this, I want to close my presentation, not without pointing you to GNU Health Con coming up in November in Las Palmas. So if you feel that winter is coming and you feel you need a refresher in terms of GNU Health, come to Las Palmas in November. Thanks a lot for your attention. If you have questions, I'm happy to answer. Anything? Oliver? It's on now. My question is: based on the success of all these projects that you have shown, which have already been implemented, is GNU Health considered a competitor to other potentially competing commercial closed-source or maybe even other free software products in that area? Definitely. Definitely.
And this is a difficult case as well, because there are also free software projects that are promoted by so-called NGOs but with a certain commercial interest in the background of selling additional stuff, selling additional licenses or software in some way. So they are in part not 100% free and they also have quite huge backing in terms of money. And of course, as you could see from the scale of the implementations that we're having, it is definitely a competitor to commercial projects or commercial products as well. Nevertheless, the concern that we found here is that the officials or the persons responsible for the product haven't completely understood the idea of free software – that you can take the product and then choose the service provider around it, versus a commercial product where you are tied to this one service provider, and if he grabs you by the neck, then he's got you. Whereas here, you can always choose a different company to support you. Anything else? Thank you very much. And now I'm happy to welcome Luis, Luis Falcón, who is basically the originator and main developer of GNU Health.
GNU Health is a community-driven project. There is a widespread variety of users that run GNU Health in different scenarios, and there is a community of 'makers' that build the software and bring it to its users. This presentation will shed some light on both communities. For the first time we have surveyed the end users and will present some statistics around that. And for you, the maker community, we will give you some ideas of what's next in the development pipeline.
10.5446/54525 (DOI)
Hello everyone, my name is Frank, also known as Moses. Today I'm going to talk about one of my favorite projects. It's called kanku, and this talk has the subtitle 'Bridging the gap between OBS and developers'. I started working as an OBS backend developer in 2015 at SUSE, and at this time we didn't have integration tests for our appliance images. The first question I have for the audience: who of you is using OBS actively? Okay, the second question would be: who has already built images, KVM images, with Kiwi in OBS? Okay, and who of you knows Vagrant and has used it? Okay, today I want to talk about the motivation behind kanku. I would like to give you an overview of the modes of kanku and we will talk about the basic concepts and the architecture. The motivation and goals: as I already said, when I started in the OBS team we had no tests for our OBS appliance images, and I made some changes in the setup and I wanted to test them. So I started with a small proof of concept, just to fire up a virtual machine via KVM with libvirt, and do this regularly scheduled, to have a nightly build or a nightly test of our builds. After a few weeks this POC was working better and better for me, and then I thought about a command line tool to use this on my laptop, and this was the time when the developer mode was developed. In the overview I will try to explain the developer mode, the server mode, the basic concepts like the jobs, the tasks, the handlers and the utilities, and the basic architecture: the daemons, the activities and the components which work inside kanku. Now I will start a short demo. Here we see the kanku init command, which creates a basic configuration file for your kanku job. Normally you then run kanku up, which downloads the image from OBS and fires up a virtual machine in your libvirt environment on the local machine. As you can see, you can do a kanku ssh to log into this machine and start developing. So, there was a short interruption. The developer mode is designed to make it easier for you to work with the images you created, on your local machine, to get an environment where you can start developing straight away: all packages are installed and you only have to check out the source code – or, for example, kanku can do this for you as well. So you can just start developing. Jobs are triggered manually in this mode; that is the kanku up command. We will see later in the server mode which other trigger modes are possible. Later on I invented the offline mode, which is very useful for me because I travel a lot; then you have a cached image on your laptop, for example, and you can fire up a machine while being offline. kanku can also share your project directory with the virtual machine, so you can switch between developing on your local machine and running the code inside the virtual machine, for example. In the server mode there are two variants: distributed or standalone. Standalone means it runs on only one machine, or you can scale out with more than one server to make it more scalable. The jobs can be triggered or they can be scheduled. There is the kanku scheduler, which enables you to run your tasks regularly, or they can be triggered event-driven, for example by RabbitMQ, or via the UI where you just start your job by a click.
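To make the developer-mode flow above a bit more concrete, this is the sequence of commands as they are named in the talk, run inside a project directory; the contents of the generated configuration file and any further options are in the kanku documentation and are not shown here:

```
kanku init   # create a basic configuration file for the kanku job in this directory
kanku up     # fetch the latest image built in OBS and fire up a libvirt VM from it
kanku ssh    # log into the running VM and start developing
```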
For me it's very convenient to fire up a virtual machine on the server so I can develop with my colleagues on this virtual machine without giving them access to my laptop or something like that. Now we come to the basic concepts: the jobs, the tasks and, as you can see, the handlers and the utils. We start with the jobs. A job is a set of one or more tasks; they are loosely coupled. In these tasks normally only one handler is specified, but we come to this on the next slide. There is a job context which is used for the communication between the tasks. For example, if you run an OBS check to check for the latest image which was built inside OBS, this information is stored in the context, and the next handler, like the image download, can take this information, download the image and store the path to the local image inside the context for the next handler, like the create-domain handler. A task runs exactly one handler with defined options. Here you can see a small config snippet for the create-domain task; in this case it is only the domain name. The corresponding handlers are in the source code, in the installed source tree. A task will only be executed once, but you can have several tasks with the same handler. As already said, there is always a job context which stores the information for the communication between tasks. Here we have the handler classes, which are normally located under /opt/kanku, in lib/Kanku/Handler. They normally have three methods: a prepare, an execute and a finalize method. They have multiple distribution modes; this is required if you run it in a distributed setup. For example, there is a port-forwarding handler which can be run on the master server and can forward a port to the virtual machine. So you can easily access, for example, the web server or the SSH server of the virtual machine via the master server, and the information is displayed in the web UI. Then there are worker-only handlers like create domain, which you only run on one specific machine, or a handler can be run on all servers, like remove domain, because when you run a remove domain you want to remove the current instance from all of the servers so that you don't run into name conflicts after restarting the job. Then we have the util classes. They are mainly helpers for the handler classes, to reduce the complexity or to make functionality available to all the handler classes that may use it. So now we come to the basic architecture and how the jobs get triggered. They get created in an SQLite database. As I already said, there is a trigger daemon which can read from a message bus, the web UI or the CLI, because everything you can do in the web UI you should also be able to do with the CLI, which triggers it via REST. Then we have the kanku dispatcher. In distributed mode it uses RabbitMQ to distribute the workload over the several kanku worker machines. Then the job is running and the virtual machine is created via libvirt commands. So here we prepared a small cheat sheet: how to install kanku, how to set it up and run your first project. This will be available on GitHub. If you recognize the URL and the QR code at the beginning – this is the URL of this presentation – there you can find the cheat sheet. Please test it, use it, and if you want, contributions are very welcome. On GitHub you can also open issues, or if you have some questions you can find me as Moses on Freenode, or look at the publicly available documentation on GitHub.
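As a rough illustration of the job/task/handler pattern and the shared job context described above – this is not kanku's actual code (kanku is written in Perl); it is only a minimal Python sketch of the same idea, and the handler options and URLs are made up:

```python
# Sketch: "a job is a list of tasks, each task runs one handler,
# handlers communicate through a shared job context".

class Handler:
    def __init__(self, **options):
        self.options = options

    def prepare(self, ctx):   # optional setup step
        pass

    def execute(self, ctx):   # main work, reads/writes the shared context
        raise NotImplementedError

    def finalize(self, ctx):  # optional cleanup step
        pass


class OBSCheck(Handler):
    def execute(self, ctx):
        # pretend we asked OBS for the newest image of the configured project
        ctx["image_url"] = "https://example.org/" + self.options["project"] + "/latest.qcow2"


class ImageDownload(Handler):
    def execute(self, ctx):
        # pretend we downloaded ctx["image_url"] and remember the local path
        ctx["image_path"] = "/var/cache/images/latest.qcow2"


class CreateDomain(Handler):
    def execute(self, ctx):
        # a real implementation would call libvirt with ctx["image_path"]
        print("creating VM", self.options["domain_name"], "from", ctx["image_path"])


def run_job(tasks):
    ctx = {}                      # the job context shared by all tasks
    for handler in tasks:
        handler.prepare(ctx)
        handler.execute(ctx)
        handler.finalize(ctx)
    return ctx


run_job([
    OBSCheck(project="devel-kanku-images"),   # illustrative project name
    ImageDownload(),
    CreateDomain(domain_name="my-test-vm"),
])
```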
There you can find further information. Yeah. Are there any questions? Yep. I haven't understood well, because I don't have enough knowledge of the backend of OBS: is kanku the tool that lets me do the build process of my images on my laptop, without needing an external OBS appliance? It's not for building your images on your laptop – that you can do with osc. But once you have checked in your image description and OBS has built an image for you, then kanku can help you to fetch these images from OBS – the latest image – and fire up a virtual machine with exactly this image. Does this answer your question? Yeah. Thank you. Okay. Any further questions? Okay. Thank you for your attendance.
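For context on that answer: building locally from a checked-out OBS package is what osc does, while kanku consumes the image OBS has already built. Roughly, with project, package, repository and architecture names as placeholders:

```
# local build of a package or image checkout with osc
osc checkout <project> <package>
cd <project>/<package>
osc build <repository> <arch>

# versus: let OBS build the image, then boot the result locally
kanku up
```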
A convenient way to work with your OBS built images kanku is designed to give you a better integration of your kiwi images built in Open Build Service (OBS) into your development and testing workflow. It provides a framework for simple automation of complex setups, e.g. to prepare your development environment or run simple tests. This talk will give an overview of the motivation/goals and basic concepts/architecture of kanku.
10.5446/54531 (DOI)
So, I think let's get started. Well, so today I am going to talk a little bit about user experience in open source and let's talk about testing. So my name is Santiago Sarate, I am a QA engineer at SUSE. I work as a backend developer for OpenQA. So the first thing is why this talk? This is not a technical talk, this is a talk just to talk about why we find over and over again the same people or different types of user asking questions in Stack Overflow on sometimes we see them in GitHub issues that they are asking, hey, I found this problem, what is happening here? And then there is all these same people telling them all over and over again, go read the manual, you did something wrong, you don't have the right application installed, go to the console and run these commands, try to do this or try to do that and then the user ends up spending, I don't know, maybe half an hour, one hour, even days trying to do one simple thing. And this is the problem with this is that sometimes these kind of users are potential contributors because if they want to install one application, if they are taking the time to go to talk to you, if they are taking the time to write somewhere, hey, I need some support here, maybe it's because they have a real interest and if they are trying to, if they are coming back all the time and saying, hey, I tried your solution but it didn't work, I tried this but it didn't work, it means perhaps that there is something missing somewhere in between and this is where the user interaction comes into play. And it's not user interaction, it's also user experience. The user experience will always unravel how awesome of a developer you are, it will simplify the need to explain the better the interaction you have within your application, the better your interaction is with the user, the less you have to explain, the less text you have to write. It also enhances the credibility of the developers that created this piece of software. So for the next time that the user tries to upgrade the application, the user is going to see this and say, maybe I will just upgrade because I really think that the developers are caring about me, they care about my time and I also would like to try to contribute back to them. So let's add the word revenue because I just need a buzzword here. If you read the first letter, you say user, which is the main focus here. We just want to, sometimes when we are writing software, we just want to make our users happy. If you are just the sole and the lone user of your application, perhaps you don't really need any type of users or nobody telling you, hey, this is going wrong. Therefore, your application might just not grow, your application might just not get better and this will just make anybody not wanting to use your application, your software. And this is all about getting feedback and bringing all the users into the yard, sharing with them, talking to them, trying to make sure that whatever they feel that they need, if it makes sense, you can implement it, you can add it so that they feel that they are hurt and they will use your applications a little bit more. They will try to contribute even more. This can be by tutorials, reporting bugs. We all know the story here. This might sound like a BS bingo, but it's really not. I guess everybody here has ever heard of LibreOffice and GIMP. So GIMP, in particular, they started to hear their user base. 
In the beginning, it was this interface with only one big square where you would draw, which was your main workspace, and then these two bars on the side. Over time, they found somebody who was willing to take the burden of sitting down, listening to the users, coming up with interaction design, coming up with new UI, and then since version 2.8, I think you also have a new user interface in GIMP that you can select and you can change. So they started to listen and to take their user base more into account. They also came up with new ideas and a good project vision. They don't want to be the replacement of Adobe Photoshop. They want to be one of the best image editing software available that is free and that is open source. This is their project goal. They are trying to achieve this by listening to their users, improving the user interaction, improving the UX writing, because it's not just making sure that the user is doing less clicks. It's making sure that the user understands better with less information when he's trying to interact with the application. And they are also providing a very good roadmap. And here roadmap is the key. As soon as you start giving the users a roadmap, they feel like, okay, so I know when to expect something in particular, if there is a roadmap at all. They started with this roadmap, I think five years ago or something like this, and they have been improving over time. And if you go back into the deep on the wiki, you can see that they are just adding more and more features and they are changing the release date. They are working now towards version 2.10 and they are almost halfway through it. Now we have the LibreOffice part. I think we all have used LibreOffice at one point. In fact, I am using LibreOffice right now. If anybody remembers any version before 5.54, they were just difficult to use. But over time, again, over time they started to work on their UI, on their UX. They came up with a very, very amazing document on how to create new dialogues, how to design the interaction with the user, how to design every single UI component. And the people from the GNOME foundation is also doing something very similar. So they are investing in making sure that the users, that the people that is actually using their application actually feel encouraged to use it. Because I think nobody wants to use something that just simply looks horrible. One thing, as one of the things, is that they have a very well-documented QI process. Between all of the open source communities that I know, LibreOffice and Mozilla have the best documentation when it comes to QA. They even have a small document on how users can do Git by section. So, and it's like four or five lines of a document where somebody who wants to do some by section just can sit down even without knowing. Because they are just running a bunch of scripts and all they need, all the people from LibreOffice need is just somebody to run those for them. And they are always and continuously empowering users to give feedback. They are always telling them, if you find an error, if you find a problem, please go unreported. They open a Telegram channel, they have their IRC channels, and they have a very active community also on Twitter, and they also have their own Ask application. And there is a very big user base. Also, there is a lot of developers, and there is a lot of money involved. 
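The git bisection workflow referred to above really is only a handful of commands. A generic sketch, where the tag and the test script are placeholders (LibreOffice's own QA guide additionally offers prebuilt "bibisect" repositories, but the mechanics are the same):

```
git bisect start
git bisect bad                        # the current build shows the regression
git bisect good libreoffice-5.4.0.3   # a version known to be fine (placeholder tag)
git bisect run ./test.sh              # script exits non-zero while the bug is present
git bisect reset
```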
But the main thing here is that the more you give to the users, the more you involve the users in your process, the better your application is going to be. Also, they have their release plans and they have very clear release criteria. This is, again, just a means to an end, just giving users the possibility to contribute, even to say, hey, maybe this point of the release criteria is just not right, let's move it earlier or let's move it later. This is one of the screenshots of the release plan for version 6.1. And again, the roadmap here is key. The release criteria are also key. And empowering the user is also key for communities. Without empowering users, we cannot grow. And now if we sit down and try to mix what I've been talking about with what I'm trying to say here, it has to do with everything. If we sit down and we start to think: who's actually using the software? We have AI using software now. I don't know how many of you have an Amazon Echo or a Google Assistant or use any of these things, but internally there is some software running there. Obviously humans are using software. We also have animals. That picture there is the project of a guy, some engineer, that decided to hack on his Raspberry Pi a very small image recognition program, so that when the cat was standing at the door it would open the door, just so that the cat can enter. So again, it's not just a person sitting behind a computer; it's anything that can interact with the system. And obviously we have robots as well – Boston Dynamics and stuff. So just imagine a world where your applications cannot interoperate. Then we cannot have Google making a call to a hairdresser and then giving the feedback back to the user that started the whole thing. Imagine a world where you cannot install applications. I guess you have heard of serverless – this thing where basically you distribute nothing, you code nothing, you install nothing; I think it went somewhere along those lines. I don't think it works for applications and for end users. End users need to install something. And right now you have Flatpak, for example. And Flatpak works amazingly. Actually the LibreOffice that I have – I'm running Gentoo here – I installed it through Flatpak. The thing is that you need to install the installer. So there is something there. Then just imagine that you have a world where you cannot improve applications, where you cannot say, hey, I found a bug, I need a fix here. So that's why we have things like Windows, where we need to wait for ages before things are solved. And then we have the world where applications aren't known. There is a lot of amazing software out there, free, publicly available. But either people don't know about the software or people don't use the software, and this is because nobody is using it. Nobody uses it because either they cannot install it, or they cannot give feedback to the creators, or simply because you cannot interact with the software. So, you know, the software is left forever alone. At one point you might think that this is ridiculous, there is no such world, but many of us have tried to install at least one of the next two applications, and on top of that Eclipse, I think. So the first one is Geany. If I want to install Geany right now on my computer, I just need to – okay, so let's say I don't know anything about Linux, so I go online: text editor, Linux. Among the first results, one is going to be Atom, the other is going to be, I think, Geany.
And there are like three others that are also going to come up; Sublime Text is also one of them. So for Geany, I just need one click to get the information on how to install it, and I get the actual instructions. It's just very easy, even for Gentoo. If I want to use Atom, for Atom I need one click as well: just download the RPM or download the deb. And nowadays most distributions are already integrated, so if I download the deb or an RPM, I can already install it. There is not much there, so it's easy for me as a user. And on the other hand, if I need other binaries, like for Windows, I just click on 'other platforms' and it takes me to GitHub and I can download the exe. But on the other hand, we have Eclipse. So for Eclipse, first I went to the website, I click download, and the first thing I get is a tarball. Who is installing a tarball in 2018, who is not running, I don't know, Slackware or Gentoo? And even for Gentoo, there are ways to not use a tarball. So you also need more clicks to get to the instructions on how to install it. And then on top of that, if you are going to develop for Java, you need a specific version of Java; if you are going to develop C++, you need a specific version of the Eclipse tarball. And it depends on what you are going to do with it, what you can download. And then it says, yeah, well, you can also download it through your distribution if it's packaged. And I don't really know anybody who is using Eclipse these days, but it sounds really painful to use it. And the problem is that for most software that is out there, usually the instructions are written only for the developer who created it, because he or she is not thinking about redistributing the software. So: let me put it on GitHub, and whoever wants to install it can go through the pain of installing this software. And well, yeah, it makes sense if you don't really want to improve. It doesn't make sense if you want more people to use your application. So what's my point with all of this? Going back to the previous slide, just imagine a world where users are just downloading a tarball. And let's say this is one of those users that doesn't really know how to use a terminal. So they don't even know how to use 'tar minus x' something and then the name of the file. So when they see this, they don't even know where to put it. Of course, yes, you can tell the user: go and read the manual, take a look, see how to install applications, see how to do this, see how to do that. But then again, you are going to have the same user coming to you again the next week, perhaps, saying, hey, I was trying this, I tried the whole week, and I don't really understand how to install this application. And if any of us has ever asked their parents or somebody from the family to, hey, just run Linux, you know the type of questions they can come up with. And again, this is all about user experience. You need to write – I would really like to recommend these people to write – documentation that is clear, that has instructions. Sometimes the user, when they go to the Getting Started guide, is not looking for the philosophical story on why you should use Bash instead of tcsh or vice versa. They are looking for the specific command that is going to allow them to run that application. So write something in the Getting Started guide that is basically saying: okay, run this command. And that's it.
If that doesn't work, then go to the troubleshooting guide or go to this place and ask, which is kind of easier for the user to understand rather than trying to figure out why is it not working on the distribution that they are using. You also need to listen to your user base. If you are always having those users coming to you and telling you, hey, I am trying to do this in this distribution and it doesn't work, there is something going on there. So improve perhaps the installation guide. And perhaps the guide where you are guiding the user where and how to configure one part of the service. Allow the users to collaborate because maybe there are some users that have a lot of technical knowledge so they sit down and they do it by themselves. And they can just simply sit down and say, okay, so I was trying to do A, I did it, and then no, I want somebody else to use A again. I want somebody else to not go through my pain so that they can just simply leave and go on with their lives. And of course, you need to test. You need to make sure that whatever you are putting there works. You need to make sure that whatever you are distributing works. And you need to make sure that whatever the user is trying to do will work or if it doesn't, there is an error. And he or she understands that there is a problem, that there is something that failed and that there is something that has to be done. So if you still don't follow what's the point, the point is that you actually need to test and you need to make sure that your documentation is up to date because if you don't update your documentation from the last three years and then suddenly, I don't know, there is a new version of the programming language that you are using to build the software, you might end up with users telling you, hey, this is not compiling. And again, if you don't test the documentation, you cannot know or you cannot catch these kind of problems. And this you can automate. You can actually, if you are using, for example, Markdown, you can just simply write a parser, take out everything that is outside a code block, execute only the code blocks and you can do this in an automated way. And it will take, I don't know, 10 minutes. And it's just about listening to your user base. You can crowdsource, crowdsource, so set up mailing list, set up forums, set up maybe not as many things as you can, but what you can actually afford. If you can afford a Twitter account and a mailing list, then great. Just let the users know that it's there. Because certain users don't think that something has support just because they don't find an email address or a Twitter account or a Telegram account nowadays, even a Facebook account. So it depends on how much you want the users to actually engage you. Also, empower them to take part in your process. Allow them to sit down and say, hey, you know, the last version of the software that you released has a lot of bugs. And I really would like to help because this is becoming important for me. So just allow them to do that work. Many of us here work already in one or two big pieces of software, and maybe one or many of us have written at one point something that is being used by somebody else, even if it's a small thing. And as soon as somebody else is using that, as soon as they are allowed to give you a review, to give you feedback on whatever they are using, how they are using it, if you take that into account, you incorporate that into your software. 
It's going to empower them, and it's going to allow them to say, hey, maybe there is something here, maybe I can keep still doing things here. And they can stick around for a little bit more. And of course, it's about making sure that the process is working. So again, about empowering the users, well, I still don't follow. So let's go back to this one. So LibreOffice, right? They are sharing some things. They are sharing a roadmap. So if you are a developer and you want to, I don't know, plan some work, share it, at least share what are your ideas. You don't need to come up with dates. You don't need to come up with very detailed information on how to do stuff or what to implement. So the big idea, think that and then start to slowly chop it down into very small parts. And then, of course, take feedback from your users. If a user is telling you, hey, maybe this is just not going to work or whatever you are planning there is not going to work for me, well, sit down, listen, see how you can improve. Make sure that you make clear the release criteria. When do you think that the software is ready for being used? Maybe for me, the software is already being used. It's usable when the software is passing all the tests that I wrote or when I go to my specific use case, it doesn't crash. It gives me the error when it shows. So everything is fine. That's my release criteria. But maybe the user doesn't know it. A user doesn't know when a nightly build is ready to be used. A user doesn't know when the beta build is going to be considered like a really good candidate. So if you make that public, if you share it, maybe perhaps a user can say, okay, I think I can get to an early stage and start to push for something to be fixed because I found a problem. Because maybe the way we are testing is not the same way as the users are actually taking the application and, you know, crushing it because that's what users do. And obviously, you need to write some documentation, decent documentation, and always try to improve it. It's not just the software that needs refactoring, but you also from time to time need to refactor your own documentation because it gets defaced. At one point, if you don't touch it, if you don't go back to it, I don't know, six months, one year into the future, somebody is going to come and tell you, you know, this is just not working for me. So what can we do? That's the end of my talk.
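The documentation-testing idea mentioned in the talk – pull the fenced code blocks out of a Markdown file and run only those – really can be a ten-minute script. A minimal Python sketch; it only executes shell blocks, and the file name is a placeholder:

```python
import re
import subprocess
import sys

FENCE = "`" * 3   # the Markdown fence marker
BLOCK = re.compile(FENCE + r"(\w*)\n(.*?)" + FENCE, re.DOTALL)

def run_doc_blocks(path):
    text = open(path, encoding="utf-8").read()
    failures = 0
    for lang, body in BLOCK.findall(text):
        if lang not in ("", "sh", "bash", "shell"):
            continue                              # only execute shell blocks
        result = subprocess.run(["bash", "-e"], input=body, text=True)
        if result.returncode != 0:
            failures += 1
            print("FAILED block:\n" + body, file=sys.stderr)
    return failures

if __name__ == "__main__":
    sys.exit(run_doc_blocks("README.md"))         # placeholder file name
```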
Very often we come across a masterpiece of software. Nowadays almost everything is cool and built for the web, or built with technologies that are changing and moving forward every single day, but we often forget how the user feels when new software is available to download and install – it can be an overwhelming experience. This talk is about how some projects show the true meaning of a Venn diagram, and offers a bit of guidance on how to make testing and user experience even better for your own project.
10.5446/54533 (DOI)
Hi, everybody. I hope you enjoyed the lunch. My name is Michal Hrušecký, I'm from a company called CZ.NIC, and I'm going to talk about the open source routers that we are making. I want to show you not just the routers themselves, but I would also like to speak a little bit about why we do it. And I would like to highlight a few pieces of software that we are using, how doing open source actually makes sense, and how it enables us to make awesome routers thanks to the open source world and communities around the world. So, a little bit about our company. It's well known in the Czech Republic because we are the Czech top-level domain registry. Legally we are some kind of association of companies, but in fact in our bylaws we have some statements that mean we are basically run as a non-profit. And it is an association of companies because Czech technical companies founded us; they are competing with each other, but they all agree that they need a stable Czech domain and stable internet, and we are supposed to make that happen. Apart from domains we are doing some other interesting open source development. We are doing the BIRD internet routing daemon. We are doing the Knot DNS server and DNS resolver. We are also doing some education: we are publishing books about IPv6, Git, LibreOffice and stuff like that. And we also have a TV series that teaches people how to use the internet in a safe way. So we are trying to do good and educate people about what to do and what not to do on the internet. And we are also doing those Wi-Fi routers. How did we get there, and why are we actually doing routers? It all started with the good old Turris. This is the first router that we made. It's PowerPC based, it had two cores, two gigs of RAM, some storage – quite big for a router. The commodity routers that you buy in a grocery store have something like 8 megs of RAM if you are lucky. So it was quite powerful hardware, and our goal wasn't actually to make a router. We just needed powerful hardware that we could give to people. We gave it to people in the Czech Republic, and the only condition was that they would allow us to spy on them. Basically our goal wasn't to sell more ads or something like that; our goal was to actually figure out who is attacking home users and what the security of people's home internet looks like. So we developed some software that reported suspicious activities, sent us firewall logs and even acted as part of a honeypot – we will get to the honeypot later. And we got all this data and we did some security research on top of it. Apart from that, since we were doing a router, we decided to do it right. So we provide security updates, and not only security updates but also feature updates. We are not restricting features based on model; whatever can run, we are trying to provide even to those old routers, even the new features that we are still developing. And we obviously give people a root account on their router, because it's their device. So they deserve to be root and they deserve to be in control of their devices. So we did this, and then our colleagues were going to conferences around the world and presenting what security research they did and what they found. And they found out that apart from the security research, which was really interesting for some people, people were really interested in the router itself, because they said that it's a cool device and they wanted it as well. But we basically made the device just for ourselves.
We gave it to people in the Czech Republic because it was financed by money from Czech domains, so we didn't feel it was right to give it to anybody around the world when basically Czech people paid for the domains and gave us the money to make it. So we then made something that people can actually buy, and it was called the Turris Omnia. I had a talk here two years ago when we were building it. It's something that people can buy; all our spyware is optional. You can still join our security research program, so you can still send us all the data that you collect and help us improve security on the internet. But it's your device: you can do whatever you want, you can reinstall it, you can run a different distribution like openSUSE. And it's even more powerful. One other interesting part is that it has ARMv7, so it's a much more mainstream platform. And it has a switch, plenty of ports, SFP, some PCI Express slots, mSATA. Basically we put in everything that we thought might be useful. So it does everything. The downside of doing everything and having everything is the price tag: it's between 200 and 300 euros. And people were saying that they really love the device, but they don't need all that stuff that we put in; they would like it with just one PCI Express, they don't need all those Ethernet ports and stuff like that – and can we make it a little bit cheaper, please? So we tried to make everybody happy, and we came up with a new router for which we are currently running an Indiegogo campaign. And we actually made the goal, so it's going to be done; we met the goal something like last week. So we are trying to make everybody happy – and how do we do that? Well, we decided to make it modular. So yeah, I have it here. Basically what we have is the CPU module, it's this one. It has just the CPU and USB 3, and it starts from 29 US dollars. And yeah, it has gigabit Ethernet. But if you need more, you can just buy another extension module, plug it in and you get additional capabilities. Like this one is a PCI Express with a Wi-Fi card. We have some switches, some of them can even be chained. We have SFP. So you can decide what components you need and you can extend it as much as you want. Well, there are some technical limitations, but within those you can extend it. And if you don't need those additional interfaces, you don't have to buy them. So is it still a router? Well, it can route. It can have even 25 Ethernet ports. It can have Wi-Fi. So it is a router. But in a more general sense, it is actually kind of a single-board computer that can be made into a router. It doesn't have any GPU, so it's not your multimedia center. But it has gigabit Ethernet, it has USB 3, and it has plenty of extensions, so you can make of it what you want, and it can even be your home server. A little side note: it comes with U-Boot, at least this version. We are actually pushing all the patches upstream and quite some are there already, so we will probably at some point rebase onto upstream U-Boot. It has a Marvell Armada 3720 and it will be shipping with a 4.14 kernel. So it looks promising from an openSUSE point of view. So let's take a look at how it actually works. Ludwig did a live demo, so I decided to do at least a live demo too. So we have a graph – let's see how far we will get. Basically what we did – I was speaking with Andreas here – we downloaded an openSUSE Tumbleweed image and basically put it on the flash drive. Currently we copied the DTB onto the flash drive as well.
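For readers who want to try the same thing, "put the image on the flash drive and copy the DTB" boils down to something like the following sketch. The image file name, the target device sdX, the DTB file name and the partition used for the DTB are placeholders and assumptions, not taken from the talk – double-check the device name before running dd:

```
# write the downloaded openSUSE Tumbleweed ARM image to the USB stick
xzcat openSUSE-Tumbleweed-ARM-*.raw.xz | sudo dd of=/dev/sdX bs=4M status=progress
sync
# the prototype still needed the board's device tree blob copied to the boot partition by hand
sudo mount /dev/sdX1 /mnt
sudo cp armada-3720-<board>.dtb /mnt/
sudo umount /mnt
```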
But in the end we hope that we will somehow handle the DTB inside U-Boot in the production unit, so hopefully you wouldn't even need that. But it's starting, booting. And hopefully we will get to the happy ending. It worked. Sir? Could you explain DTB, please? Oh, yeah. For those who don't know, there is something called a device tree on ARM devices, which basically describes the hardware, and the device tree binary is the DTB. And yeah, all those devices have support in the kernel; you just need to say which peripherals are connected where and how to access them. So that's what the DTB is for. We have SPI NOR flash where U-Boot already lives. We will probably have the DTB in there as well somehow in the final unit; this is just a prototype. So basically when we copied the DTB over to the Tumbleweed image, it booted like this, and we have openSUSE running on top of it. But apart from that, we are developing our own system for it. And we want to make it not only secure but also user-friendly, and actually bring security not only to advanced users but to home users too. And we are trying to simplify those security features that you might be interested in, so even beginners can configure a VPN and stuff like that. So one use case for that, and what we actually decided would be a nice fit, is Nextcloud. Some people are asking for it. And it actually makes sense to combine our efforts, because what the Nextcloud guys are doing is trying to give you an easy way to self-host and manage your data in a private way. And what we are trying to do is make sure that your home network is secure and that your gateway to your home network is secure. So we actually have something like the ultimate self-hosting box, something that can be running in your home. It doesn't take that much electricity, you have a root account on it, you have those automatic updates, so it's always secure, and we have some additional security features that we are providing. We have a nice web interface that allows people to actually set up the device from the start. So yeah, it makes sense. So, as I said, we have nice easy OpenVPN support, so people can create a VPN server on our routers really easily. There is a web-based wizard that will basically create a certification authority for you and will let you generate client certificates and client configurations. It's not that hard, but even though I know how to do it manually, it's tedious work – you don't want to generate SSL certificates using OpenSSL and remember all the parameters; in a few months you will forget how you did it. So we did easy OpenVPN support. We have automatic updates that get downloaded, and your router can be updated regularly. You can just set it up once and then it will stay up to date and all the security issues will get patched. And oh, well, we have our Nextcloud packages ready. So your Nextcloud will get updated even if you access it just via the Nextcloud application on your cell phone or something else. And even if you don't log into Nextcloud, you can have it updated automatically by the system. Newly, we even have a web interface to actually format and set up a hard drive so it will be mounted. And there are still some things that we plan to do and that we are working on, and that's extending the web interface to actually allow setting up a RAID and making it even easier to install Nextcloud from the web UI.
So that's what we plan to do, and it makes quite a lot of sense with the MOX because, yeah, it doesn't have to have all those Ethernet ports and Wi-Fi; it can be just a NAS box. We actually have an extension module that provides four USB ports, so you can connect hard drives from the grocery store that you thought you would be buying your router from. One thing that we had on our routers from the beginning was a kind of honeypot. And it was something dedicated to our routers, it was one of our cool features, but people kept asking whether they could run it even on their server, and we actually made it possible. Now it's called Honeypot as a Service. And basically how it works is that, yeah, honeypots are cool and fun, and it's really fun to lure attackers into your honeypot and see what they are doing. But there is still some small risk: if they are clever enough, they might try to break through and they might escape at some point. So the solution is to let somebody else run the honeypot for you – and we will do it for you. You can register on our website and install a simple proxy package; it's available in Leap 15 and Tumbleweed. And this proxy package will basically do a man-in-the-middle on the attacker and send him to our servers. So he will think that he is actually attacking your router and is a step closer to building his own botnet, but in fact he's running in a honeypot on CZ.NIC servers and you are just proxying him there. To show you how it looks, this is the website. And when you log in, yeah, we have some global statistics, so even if you don't participate you can see which is the most evil country, and you can even get some data on what attackers are usually doing. But if you register, you can add your devices and you can see who was actually attacking you, what passwords they were using and how many commands they ran. And you can actually take a look at the whole session, what the attacker was trying to do. And you also get statistics per device, so you can see that your router is hated by China and your server is hated by Poland and stuff like that. So it's fun. So that's one of the services that we developed for our routers, but then it split off into a separate project and now you can use it even on your server – you don't need the router to actually use it. Another thing that allows us to do quite some stuff on our router is open source software called Suricata. What is it good for? It's good when a simple firewall where you just block ports is not enough, and when you want to have some more information about what's going on in your network and basically what's going on between the internet and your network. It's an intrusion detection and prevention system. It works with some kind of network flows. And it can... okay, sorry for the delay. So where was I? Suricata. It can look much deeper inside the packets and it can actually parse the data that are going through and interpret them somehow. So you can log the information and you can then process it somehow. I think on the first day there was a talk by Peter Czanik about syslog-ng, and he showed quite some examples of what you can do with logs that contain interesting data. And yeah, he loves to play with logs, and in his talk he actually made looking at logs sound interesting. So if you get the data, you can do plenty of stuff with the data – apart from logging what's going on, doing some statistics and maybe even getting some alerts. It's also interesting that you can actually write rules.
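As a taste of what "writing your own rules" means, here is a rough sketch of a Suricata rule in the spirit of the facebook.com example the talk gives a bit later. The keyword names are from my recollection of Suricata's rule language and the sid is arbitrary, so treat it as an approximation and check the Suricata documentation before using it:

```
# drop TLS handshakes whose SNI contains facebook.com (sketch; verify keywords against your Suricata version)
drop tls $HOME_NET any -> $EXTERNAL_NET any (msg:"Blocked facebook.com TLS handshake"; tls_sni; content:"facebook.com"; nocase; sid:1000001; rev:1;)
```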
You can detect the stuff that you are interested in, get some alerts, and you can even block the stuff. So what can you actually do with the traffic when it's most of the time encrypted anyway? Well, you can get all the unencrypted communication, which nowadays is basically DNS – most of the time people still don't use TLS for DNS. But even in encrypted traffic there is stuff like the certificates that are presented when you are connecting. SNI – yeah, there was another interesting talk here yesterday, I think, the TLS 1.3 talk – and yeah, the SNI won't be available in the new TLS, but you still get some information about the certificate. And yeah, sure, you can get IP addresses, MAC addresses, how long the connection took, the amount of data, and stuff like that. And you can alert when you see something interesting. So, some examples. Okay, one more slide about what you can do with it. You can monitor devices you don't trust. For example, if you get a device that is not as friendly as ours and you don't have a root account on it – you have a new TV and you have no idea what the TV is doing on your internet, or your washing machine or your fridge – and you want to know what they are doing, you can spy on them and see how much they are connecting somewhere. If they are using unencrypted traffic, you can see what they are actually doing. My TV is regularly browsing Baidu for some reason, and I haven't discovered why yet. Yeah, you can try to detect this kind of stuff. There is also a large collection of rules available for Suricata already, which try to detect various malware and the typical behavior that some malware exhibits. And yeah, you can write your own rules, detect what you care about, and block it. So this is just some example of what you can get from a flow. You get some information on when the session started, how long it lasted, how much data was transferred. But let's take a look at the interesting bits. For example, if we get a TLS connection, so basically somebody accessing a server in an encrypted way, you still get information about which company issued the certificate, usually which server it was issued for, and much more data. And everything you can see here is actually what you can match and create alerts on top of, and what you can actually use to block stuff. So you can just write some simple rules that say, hey, if there is facebook.com in the certificate, block this connection. And yeah, it wouldn't help you if the device has already established a secure connection, but it will actually allow you to block the establishing of the secure connection. So, this one. Yeah. One thing that we are using it for – we have a website, we developed it, it's basically a demo website of our router interface, so we can look at it a little bit later. And one thing that we are using Suricata for is basically to give you some overview of your traffic. So in our router, we are using Suricata to monitor your network if you install it. And then we are collecting some data about where you went, which devices connected where, what protocols, and we are matching all those flows with the names of the services that you are accessing. The interesting part about this is that you don't see just the IP address of the server that you tried to access. If we saw the DNS query before, we remember it.
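To give an idea of what such a flow record can contain, this is roughly the shape of a TLS event as Suricata writes it to its EVE JSON log. The values are made up and the field names are from memory, so consult the Suricata documentation for the authoritative format:

```json
{
  "timestamp": "2018-05-26T14:03:11.123456+0200",
  "event_type": "tls",
  "src_ip": "192.168.1.42",
  "src_port": 51512,
  "dest_ip": "203.0.113.10",
  "dest_port": 443,
  "proto": "TCP",
  "tls": {
    "sni": "www.example.com",
    "subject": "CN=www.example.com",
    "issuerdn": "C=US, O=Example CA, CN=Example Issuing CA",
    "version": "TLS 1.2"
  }
}
```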
So we know what hostname your computer was searching for and the IP was the answer to. And if we see SSL handshake and we can deduct the hostname from it, we use that one as it is even more precise information. Why is it important is that you have those web hostings that have tons of websites on one IP. And you might be interested in which one was the computer actually trying to access, whether it was some cooking site, whether it was some how to make a bomb site. So yeah, you need to distinguish between those. And that's what we are doing and we are presenting it in user-friendly way. So since I use this opportunity to show you this site, let's take a look at what else do we have here. This is our web interface. I was already speaking about some settings. Yeah, one important lesson that we learned from all those honeypots, if you remember it. Yeah, if you look at them, most of the passwords and login information are quite simple. Yeah. Oh, yeah, sorry. Let's see. Okay. So now it's visible. Yeah, so, yeah, sorry. Did you saw the stuff that I was showing before or were you just afraid to ask that you don't see it? So yeah, this is the list of people that were trying to attack one of my computers. And yeah, you see even here that one thing that people are trying is Ubuntu. There is also similar default passwords for ubiquitous routers and stuff like that. So one thing that we did in our web interface is that there is a first-run wizard that you have to go through to actually get access to the internet. And the first step of this wizard is setting up your password. So there is no default password and you have to set it up before you connect to internet. Then we have these notifications. Those can be sent to your email address as well. So even though router updates automatically, you get emails when it does and will it install and you still have some information about it. One thing that we are doing a little bit differently is also that we are doing DNSSEC validation by default. Well, it's easier to ask who heard and know what is DNSSEC. Okay, so about the third of audience, that's kind of good. Yeah, DNSSEC is basically a way to actually sign your zone data. And yeah, basically your ISP is usually providing you with DNS resolver, but unless you verify the keys and verify the data that he is sending to, you can try to redirect you somewhere else to some other server even if you try to access something that's commonly used in some cafes and restaurants, basically. You ask for some server and they will redirect you to their portal. So to avoid being redirected to malicious site like a portal, you can use DNSSEC to validate the answers and that's something that we have enabled by default. And because quite some ISPs have a broken DNS servers, we can run full resolver on our router and it will resolve DNS by itself. So you can avoid broken DNS servers of your ISP. And what I wanted to show as well is VPN configuration, how it looks like. I said that it's simple. At the beginning, you just say that you want to generate certification authority and it will do everything. And yeah, it shows some settings, but you can change the IP, but if you don't, it will work and it allows you even to make it so that all the traffic will go through your VPN. So if you are in some countries that you know that they are spying on you, you can use VPN if they wouldn't block everything to actually connect and go through your home network and send all your traffic to your home. 
And when you want to create a client for your VPN, you just enter the client name. Is it visible? Okay. Yeah, usually all these data projectors have lousy resolution, so everything is big, but at this conference we have a higher resolution beamer. So you just enter the name of your client, click the create button, and it will generate the configuration. And then you can get a config that contains all the certificates embedded. So you can just have one file, copy it to your Android phone or your computer, and connect to your VPN. Another interesting part that we did is some measuring of network connectivity. So we have some software that can periodically measure your speeds, so you can have an idea of how good your connection is, and when you are sharing it with multiple people, how much bandwidth you have at some point in time. So I think that was all that I have prepared. A few pointers at the end. That's our main website. The site that I was clicking through to show you how it works is demo.turris.cz. Honeypot as a Service can be found at haas.nic.cz. Turris MOX, the new modular router, you can get on Indiegogo. And we are making open source devices, so there is a link to our GitLab that contains plenty of projects with the sources for the stuff that we are using in our routers. So thank you for your attention. And now, do you have some questions? Yeah, thanks Michal. It's very, very enlightening. So if some of us would like to contribute software and feedback, that's understandable, but what about the hardware engineers who are interested in taking your designs, extending them, manipulating, improving, or maybe making their own modules for the new architecture? So regarding the hardware, I didn't mention it that much, but we have the full schematics online, so you can read up on how the hardware is connected. And the layout? Schematics are good, layout is better. KiCad, something open source? Unfortunately, I think that the layout was done in some proprietary software, because our hardware guys had trouble making it in an open source one, for some reason. I'm not a hardware guy, so I don't know the details, but they ran into some troubles there. And we don't currently release the whole PCB layout as it is, because we are still making those routers and we don't want cheap knockoffs, but we will release it once we stop making them. And regarding the new Turris MOX, there is a specification, yeah, I'm not sure whether I will find it. Okay, I probably won't find it right now, but somewhere on our website there is a complete specification of the connector. What we are using for connecting the various modules is basically a PCI connector that can be found anywhere. It's not some special proprietary connector. The tricky part is the wiring. We are not using classical PCI, but we are passing through PCIe and SGMII and various other stuff. There is also power and stuff like that. But we have documentation online about what you can find on which pins, and that's probably all you need to develop your own module. So a couple of questions. First of all, obviously when some of us power users are trying to hack on those devices, we can get by with having an SSH interface and so on. What do you think would be necessary, or what do you see as the missing bits, in order to make the router comfortably usable for average users on an openSUSE basis? Do you think it would be possible to take the web interface and just package that for openSUSE?
Well, the tricky part is that the web interface is kind of tied to the... Config files of OpenWrt? Yeah. We are based... I think I maybe didn't mention it, but we are running OpenWrt, the Linux distribution, on our routers, therefore stuff that is available for OpenWrt is available for our routers as well. And the tricky part is that if you want to have a similar user experience on an openSUSE-based system, OpenWrt has some specifics regarding how the configuration is done, so you would have to re-implement that part. But the web interface is actually written in Python, and nowadays it's not that tied to the actual OpenWrt. There is a backend and a frontend with an abstraction layer in between, so you just need to re-implement the backend and you can reuse the whole frontend. Or there's a system manager, right? And I think WebYaST is dead, right? Yes. Yes? Do you have any more questions? Yeah, I do. Yes, I do. You talked about this Suricata framework, did I understand correctly that this is something that you have developed on your own? No, no. That's one of the open source projects that we use. There's a huge community around it. And we just kind of abuse it to do the simple overview of your network traffic; in the future we want to implement even the rules. But it's a generic open source project, and it's a nice example where being open source actually enables us to do quite some interesting stuff without too much effort. Without basically writing it from scratch, we can use open source software and provide our users with extra functionality. But there's not just one DPI open source project, there's also one called Snort, and I don't know, a couple of others. Do you have any overview of what the differences are and why Suricata is the best? Well, I wouldn't say that I have a huge overview of all of those. We just picked Suricata because we liked it. Yeah, if you like Snort, you can use Snort. And I think we had it in our repositories as well at some point. Not sure about the current state, whether it builds or stuff like that. And not sure whether other stuff from this area builds and how well they run. We just took some extra effort to integrate Suricata because we liked it, and we like the community around it. Then a third question, the Honeypot as a Service. If we install that on an openSUSE system, are there any configuration settings necessary? Or, if you're redirecting the traffic from the local system to your servers, is there any QoS assurance or traffic shaping going on to assure that they cannot DoS the actual router? I don't think there is any QoS. You can probably set it up on your system. And there is some minimal configuration that you have to do: basically, when you want to track stuff, you have to generate a token and insert the token into the configuration file. And I think that's the most important part. Then, yeah, you obviously want to run it on port 22, the SSH port. So you probably need to move your SSH somewhere else, open up the port on the firewall, and make sure that you can connect to the different port before cutting yourself out of your box. But yeah, I think I actually wrote a blog post on how to do that when I packaged Honeypot as a Service for openSUSE. So it's somewhere on Planet openSUSE.
So most of the time you just need the token from the website. And if you are happy with the default settings, then you are low done. QoS, you have to implement yourself somehow. Yeah. Two things. I mean, I don't think you mentioned when the tourist mocks starts delivery or when it should be done. Current plan is that most of the base modules should be, I think we promised something like October. So generally end of the year. Some of the new modules like the USB one and pass through modules will take some extra time and I think the scheduled date is something like December because those are the ones that we started to work on later and they are more complex. So it took us some more time to prototype them and yeah, there is still some work ongoing on those. Actually, I came up with a third question, but first and second. I saw you opened or kept the fundraising open a little longer. Are you going to extend that again? We cannot extend it again, but I think that it can be changed to in demand mode or something like that where people will be able to buy additional devices and stuff. But yeah, that's something that our marketing and business people do. I'm just a developer, so I don't see into that stuff that much. There might be some, it might get a little bit more expensive and yeah, it will eventually get to retail. But yeah, if you support us now in the campaign, there is some discount for that. Yeah, it's like a week longer, I think at running. That should end next Thursday. Cool. And then the last one was, like you mentioned, OpenWRT that you guys use and some other projects. I mean, how much do you guys contribute back upstream? We are trying to get to, but we contributed some patches to OpenWRT. Actually we are, yeah, it was so, there was some forks and difficult situations in OpenWRT community. And in between, we accumulated plenty of changes that we are trying to clean up now and basically ditch everything that we forked and didn't have to. And we are trying to polish all the patches and send it upstream. Regarding the mocks, Ubud that you saw is basically almost upstream. I think that Mike Wicks said to me that he had to refactor some patch. So it might not be in upstream yet everything, but there is a huge part that is already upstream and the plan is to have everything upstreamed. We want to upstream the kernel support as well, while we are waiting for some other subsystems also to upstream their patches. But yeah, we will try to push everything upstream as we can. We are trying to. Yeah, we had quite some changes in the old OpenWRT 3 before it settled somehow and it's hard to march, but we are trying to clean it up and slowly push everything. Have you tested much with PF Sense, MonoWall or these class of router software packages? Yes, we don't have BSD, we have Linux. So unfortunately that blocks us from using PF Sense. So we didn't try it, but there was some question on our Indiegogo. Somebody asked us about BSD support. We told them that basically, well, we have limited resources, so we are trying to focus on what we ship by default. But they said that CPU shouldn't be that hard and it should be somehow supported in free BSD. So yeah, somebody might do the port for OpenSUSE. Andra did a lot of work on Omnia. So yeah, if we get somebody like Andra in free BSD, there will be free BSD port. So there are no closed firmware blocks necessary to run this device. 
In Omnia, we were using ATH 10K Wi-Fi cards, which requires firmware, but all our cards are not soldered in, so you can replace them. And I'm not sure about Armada and ATF and those weird secure V8 stuff. Do you know Andra? For the Armada 8K, on the Machiato bin, there is a free RTOS-based blob that is being used for a Cortex-M processor that's part of the SOG. And for that, I've not yet found the corresponding sources, but in theory, it should be possible to rebuild that. Yeah, well, all the software that we are compiling is open source, but we are using some tools to flash U-boot. I'm not sure how well they are supported by Marvel. That's something that our kernel guys mostly deal with. But kernel is open source, U-boot as well. User space, yeah. Not sure about these really, really low level stuff. So flashing U-boot should be possible via U-boot itself, which requires to have a working U-boot on the device. If you have one, then you're good, if it somehow breaks, then I think there's close tools from Marvel for the 3700. I mean, it would probably be possible to reverse engineer that, but that would take someone with time and desire to do that. Yeah, we would have a desire. I'm not sure whether we will have the time right now. And frankly, I have no idea what the state on those low level stuff. But yeah, it can boot over the serial with those Marvel tools. And we are shipping with, yeah, the Mox has a SPI NOR, and we are shipping with U-boot already. So you have a working U-boot that you can start from. Anything else? No? Then thank you for your attention. And if you have any extra questions, find me somewhere out there or come to our forum or mail us or something. Thanks. Thank you.
At CZ.NIC, we are making open source routers. Those come with automatic updates, plenty of software available in repositories, a root SSH account and other nice features. What challenges does that bring? How do we cope with them? Why would you want an open source router anyway? What open source projects are we building on top of? And what actually spun off out of our router?
10.5446/54534 (DOI)
All right, so I think we're going to get started. So what we're going to talk about today is stacking and namespacing the LSM so we can make it available to containers. We'll talk a little bit about why and what the LSM is even for most people may not know. Okay, so it's Linux security modules. It's a basic infrastructure that the kernel holds and then there's a, and it abstracts the kernel or the security away from the kernel so they don't have to worry about it so much because there's a whole bunch of different wacky ideas in security. And there's a whole bunch of different security modules, SE Linux, SMAC, AppArmor, Tamoyo, IMA, EVM, the loadpins module, YAMA. And there's a whole bunch more that have been proposed that are currently not upstream. Some of these would like to live by themselves, but others actually just want to live with the current LSM, the current modules that are there. So why do we need to stack and namespace it? So containers, well, there is the, we talked about coming modules, the proposed modules, some of them want to stack with the existing LSMs, but containers, they end up using the host LSM. And that means they get, well, they use the host kernel, they get the host LSM, that means they get the host policy as well. So if you're doing like a system container like LXD does, if you boot up your SUSE container under a rel system, SE Linux is going to be enforcing on the host and you get no LSM in your container. Just fails to load, they have to block it. Besides the system container thing we talked about, there's also app containers. So some things are doing sandboxing nowadays and, okay, more than some things, there's lots of things doing sandboxing and they all do it different ways and they use different technologies. Some of them actually want to use the LSM as well to help harden their sandboxes. Snappy is doing this, but for Snappy, when it goes over onto a rel or a fedora system, it's the sandboxing the LSM parts based around App Armor, it's not there, so it can't use it. So let's talk about how is the LSM set up a little bit. So basically in the kernel, we have some hook points. There's gathered throughout the code and they just call into the LSM essentially. In the security blobs, we have these security blobs, so in data structures like the INO, the file, the task, the cred, super block, several others, there's this void star pointer that the LSM module gets to manage, allocate, give itself how much space it needs. And then the LSM when it registers, it sets up a list of functions that are going to get called and these are used by the hooks that are already in the code. Now an LSM doesn't have to register every hook available, it only has to register the ones it uses. And then there's also, the infrastructure provides a few common interfaces, right? So we've got the proc adder interfaces, those are used for things, well, they're used by the various LSM projects, LIBS like APROMAR and SE Linux use them, but also common utilities like PS when you do PS-C, it's actually reading the proc adder interface. There's a security FS available to LSMs, not every LSM uses them, but it's common and shared. SO peersec, so there's, just like there's an SO peer cred call in the, sorry, brain freeze, the networking file extensions, what I, I can't think of it right now. Anyways, you can use SO peersec to get a peer label, security label off of a connection. 
And then there's sec mark, so like the IP tables has been extended so you can set a sec mark on things that the security module can use. So let's, it shouldn't be too bad, right? If we can stack this so we can get multiple LSMs running, right? Just running over the basics. So minor LSM stacking, right? We needed to start somewhere. And so we picked a case where there was some existing issues, right? Before this landed, before minor stacking landed, there was, Yamaha had landed in the kernel and it was manually stacked. So in the hook points that it needed, instead of actually getting called, it was manually hooked into the code so it was additional hook points. Capabilities were manually stacked by each LSM. So that's what we started to try fixing with stacking. So the minor stacking, what it does is it makes those hook points basically a list, right? It's an H list now, but, and so if a hook point is called, it's going to iterate down every function on the list for each LSM that's registered. So in this example right here, task PR control, SE Linux has a hook function registered and Yama does. Oh, I got SE Linux and SE Linux. That's near. It should be Yama and SE Linux. And then on the, for tasks set nice, just SE Linux has registered that one. And so when you do these cook calls, it's only going to call the functions out. There's only the overhead if you actually stack. If you don't, there's no extra overhead or if you're not using the function. This landed in 4.2 and it cleaned things up quite a bit. So we covered most of what is there already, but it ends up splitting the LSM into two types. There's a major type or a minor type. So the difference is the major types can make use of the security blob on the object pointer or they can make use of the existing interfaces that are shared. If an LSM does either of those, it's considered a major. Otherwise, it's a minor and you can have as many minor LSMs as you want, only one major. So you can stack SE Linux in Yama or smack in Yama, app armor in Yama, but you can't stack SE Linux in app armor. Not that you really want to, at least not for a system policy. So let's try it. So the next goal is to make it so that every LSM can be stacked so we can get rid of this limitation, right? Is it useful? Some people complain, why would I want to stack SE Linux and app armor on my host, you know, my system? And there's a valid complaint there, right? You don't want to have SE Linux policy and app armor policy both running at the same time, confining everything. It would get to be a mess. But that's not necessarily what we're trying to do. We're trying to make this more useful for new LSMs so that they can be more flexible than what they use. So for LSMs not designed to be total system LSMs. And we also want to make it so an LSM that is used for a container type situation or sandboxing applications can be used selectively, right? And so, yeah, it is useful. And it's not easy. So blob management. What do we do there? This is probably the simplest part of it. The LSM takes over. So the infrastructure takes over. It's allocating and deallocating the blobs. LSMs are going to set, the size gets set at registration. So when an LSM registers, it registers the size it needs. This is an optimization so it doesn't have to do, there's not an extra layer of imp pointers and allocations done. So it just, it all put together in one big blob. And each LSM when they register, it gets an index of where they exist in the blob. It's just an optimization. 
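To make the "hook points become a list" idea above concrete, here is a tiny toy model of how stacked hook dispatch composes: every registered module that implements a given hook gets called in order, and the operation is only allowed if none of them object. This is purely illustrative user-space Python, not the kernel's C implementation (which uses hlists, per-LSM blob offsets and so on); the hook names follow the talk's own example.

```python
"""Toy model of minor LSM stacking: hooks become lists of callbacks and
an operation is denied as soon as any registered module denies it."""

EPERM = -1
hooks = {}            # hook name -> list of (module name, callback)


def register_lsm(name, implemented_hooks):
    """A module only registers the hooks it actually uses."""
    for hook_name, func in implemented_hooks.items():
        hooks.setdefault(hook_name, []).append((name, func))


def call_hook(hook_name, *args):
    """Walk every registered callback for this hook; first denial wins."""
    for module, func in hooks.get(hook_name, []):   # no overhead if nothing registered
        rc = func(*args)
        if rc != 0:
            print("denied by %s" % module)
            return rc
    return 0


# Example: a Yama-like module only cares about the prctl check,
# while an SELinux-like module also checks task_setnice.
register_lsm("yama-ish", {
    "task_prctl": lambda task, option: 0 if option != "dangerous" else EPERM,
})
register_lsm("selinux-ish", {
    "task_prctl": lambda task, option: 0,
    "task_setnice": lambda task, nice: 0 if nice >= 0 else EPERM,
})

print(call_hook("task_prctl", "task1", "harmless"))    # both modules called -> 0
print(call_hook("task_setnice", "task1", -5))          # only selinux-ish -> denied
```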
It still basically works the same. Each LSM is just looking at a specific piece of the blob. Sec IDs, unfortunately, these are kind of a pain. They exist in the kernel. They're used in the networking and auditing stack areas. They're basically like the security pointer, except we're going to cram it into a 32-bit integer. This is really inconvenient for stacking, for any generic multiple use, because now this 32-bit integer has to be mapped somehow, right? Worse, we can't divide it up. We can't just say each LSM gets six bits that's registered or 12 bits or whatever, because different LSMs have already divided that security ID up, and the bits in that security ID up. And some LSMs actually expose some of these to user space. And so they can't change those, because that would break the user space interface. Also, even if we'd like to have a security ID be a void pointer, it's not going to happen, because this networking stack is very sensitive to size and cache lines. They won't extend the 32 bits to 64, and so we are stuck with it. So what do we do? So the LSM infrastructure again is going to take over. It has to build a mapping between the two pole that is the registered LSMs, each view. They register, you know, they put in their security ID for each on the hook calls. And then it maps it to and creates a mapping for its own internal sec ID, and then that is used at the system level. And then when it's passed back into the LSM, it unwraps it again and passes just the sec ID for that LSM. It's a pain, but it's what we have to do. So there's extra overhead here, but again, it's only when it's used. If it's not used, if an LSM is not using that part of it, you're not going to get the overhead. And it has lifetime issues. This is something that is a work in progress and needs to be worked on still. So the sec ID is when they were conceived, there is no concept of a sec ID coming in and being tracked and then being freed. It's just an integer, right? And so it's possible that these things, they get stuck on network packets and they exist beyond the structure that sent them, say the socket, right? So if the socket sends a packet, that sec ID gets on the packet and it goes into the packet and it goes into the system. It lives longer than the socket. The socket shuts down. There's no ref counting or lifetime management on these things. So that's still an issue to be resolved. Thankfully, sec IDs actually don't roll over that often. So the mappings aren't too bad. Another problem is shared interfaces, right? We have those shared interfaces we talked about before. These are user-facing interfaces, right? So if you change proc-pid adder, not only does existing code break that expects to use it, but also all these useful utilities like PS-Z, top, that use it, they don't work correctly anymore. And because they're not part of, say, an LSM project, they may be harder to update and get to move to new interface. So peer sec, that's the one that you use on the socket. Sock options call to get the socket peer label. So how do we go about fixing these? Well, we can define new interfaces. That's all well and good. New code and new libraries can use that. But again, the old code's not going to use them. We can virtualize the old interfaces. So the idea being here is that the old interfaces were never designed and never used with multiple LSMs. So what we're going to do is we're going to pretend that there's still just a single LSM. 
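The sec ID juggling described above — each stacked module hands in its own 32-bit value, the infrastructure maps that tuple to one internal token, and unwraps it again on the way back — can be sketched like this. A hypothetical user-space model only; the real code additionally has to worry about lifetimes, rollover and the networking paths.

```python
"""Toy model of secid mapping for stacked LSMs: a tuple of per-module
secids is folded into a single 32-bit token and unfolded again later."""

modules = ("selinux", "smack")        # order of registration matters

tuple_to_token = {}                   # (secid, secid, ...) -> kernel token
token_to_tuple = {}                   # kernel token -> (secid, secid, ...)
next_token = 1


def to_kernel_secid(per_module_secids):
    """Map the per-LSM secids to one value that fits in the shared u32 slot."""
    global next_token
    key = tuple(per_module_secids)
    if key not in tuple_to_token:
        tuple_to_token[key] = next_token & 0xFFFFFFFF   # must stay 32 bit
        token_to_tuple[next_token] = key
        next_token += 1
    return tuple_to_token[key]


def from_kernel_secid(token, module):
    """Hand a module back only its own secid when the token comes back in."""
    return token_to_tuple[token][modules.index(module)]


# SELinux says this packet is secid 7001, Smack says 42; the rest of the
# kernel (audit, netfilter, ...) only ever sees the single mapped token.
token = to_kernel_secid([7001, 42])
print(token, from_kernel_secid(token, "selinux"), from_kernel_secid(token, "smack"))
```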
When a task is running that's using these interfaces, it's really only interested in one thing. And so we set up a default for display LSM. And that's what's going to get chosen, the LSM that's chosen to put its information on that interface. And then there's an interface added that can be used to set that. So for App Armor, you could use the A exec call and set that display LSM. And then the application that's called would see App Armor as the LSM in the stack. Well, it wouldn't see a stack at all, but it just see that LSM. There's some other problems. With networking, SecMark again, that's a user space interface. IP tables uses it. Currently it's only SE Linux. It wasn't designed to be shared. There's no way to choose an LSM. It's only a set or read at the moment. No concept of composing. It doesn't handle bridged interfaces properly, which is a problem for containers. It's tied to the network namespace and hence the user namespace, which leads to its own issues. So how do we fix this? We can actually extend SecMark. It was set up originally to accept different LSMs, but not multiple LSMs. So it's easy to add new LSMs to SecMark. It's not easy to stack them. What we need is we need to set up some way to either specify the LSM that is supposed to be used or use the default namespace, kind of set it for the network namespace, just like we were talking about for proc interfaces. Using the LSMs, what you want to do there is internally it actually maps the SecIDs and then back. So what you want to do there is when you map the SecMark to a SecID, you find out which LSM belongs to, either because there's a set value or because, again, the default name, default display namespace. And then you only set that LSM's value, and then you can map it back to the SecID back into a SecMark. It's a mess, but it can be done. There's still a little bit of work around the namespaces that need to be done. Packet labeling, this is a little worse. So by packet labeling, we mean SIPSO, Calypso, XFRM, this takes the packet, puts the security label on it, and carries it out across the network. These actually can leave the machine, right? They can travel out to a different machine. There's no real way to deal with this. There's no mapping you can do on your local host and then expect the foreign host that you're sending the message to actually reverse that mapping, not reasonably anyways. You could create mappings, send them to the foreign host, have it receive them, update its stuff, and then take the packets, but it's not possible. It's just not practical. So the solution there is either you give it the packet labeling to a single LSM if it needs it, tries to register for it, it can get it or it can't, and then it's the only one that gets to use it. Or the impractical solution is every LSM that's using it have to agree on what the label is. Again, I said it's not practical because that's not under the control of the LSM. These LSMs have policy that authors create and load. And so it's not just the LSM itself, but the LSM policy that would have to be in agreement. And everybody gets to change and modify their policy. So really, it's one LSM at a time on those. So how close does this get us? So the current stacking patches get us to this situation. We can boot a system with SE Linux, SMAC, App Armor, other ones all enabled. And they come up. 
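The "pretend there is still just a single LSM" trick for the legacy interfaces can be modelled the same way: every module keeps its own label for a task, but the old proc attr interface only ever shows the one selected as the display LSM for that task. Again a conceptual sketch, not the actual patch set, and the name and location of the real selection interface have changed over time in the stacking series.

```python
"""Toy model of virtualizing the single-LSM /proc/<pid>/attr interface:
each task carries one label per module plus a per-task 'display' choice."""

class Task:
    def __init__(self, labels, display):
        self.labels = labels        # e.g. {"selinux": "...", "apparmor": "..."}
        self.display = display      # which module the legacy interface shows

    def read_attr_current(self):
        """Old utilities (ps -Z, libselinux, ...) keep reading one string."""
        return self.labels[self.display]

    def set_display(self, module):
        """New interface: a container manager flips this before exec'ing the
        container payload, so the guest only ever 'sees' its own LSM."""
        if module not in self.labels:
            raise ValueError("no such LSM on this kernel")
        self.display = module


host_shell = Task(
    {"selinux": "unconfined_u:unconfined_r:unconfined_t:s0",
     "apparmor": "unconfined"},
    display="selinux",
)
print(host_shell.read_attr_current())      # host tooling keeps seeing SELinux

container_init = Task(dict(host_shell.labels), display="selinux")
container_init.set_display("apparmor")     # e.g. LXD before starting an AppArmor guest
print(container_init.read_attr_current())  # guest tooling sees AppArmor only
```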
You got to be careful on how you bring up your system doing this because the current state of things, it can break boot because different parts of the boot on different systems expect different LSMs. So say on Fedora, they have some things in their boot checking if SE Linux is enabled. It looks at the stacking and says, hey, SE Linux is enabled because it's not using the shared interface to actually check that. And then it says, OK, SE Linux is enabled. Now I'm going to hit the shared interface and do some things because that's where my libraries are set up. And if it's not set up as the default display LSM on Fedora or RHEL, then things break. System D goes, blah, SE Linux policy failed to load or it failed to enforce something and you don't boot. Same thing on Ubuntu or SUSE if you're running App Armor. App Armor code in the boot sequence is looking for, it says, hey, App Armor is enabled. And then it starts looking at the interfaces and it'll say, hey, this failed to load and that can break your boot. Often they break in different places. Depends what parts of the system you're running. If you're running a network, the network will fail to come up. Same thing with SE Linux. If you're running a GUI, the GUI will fail to come up. Depending on how you break it, it just might die really early. You might get to a recovery console. But you got to be careful with it still. But it can be done. But still, this isn't very useful for containers, right? Because every task on the system, including the container, has all the LSMs that are in the stack, applying policy to them and it's from the host. So the stacking is not enough. It gives us the ability to call into different LSMs, but LSMs need to be namespaced. The problem is, LSMs don't want to be namespaced, at least not in the traditional sense. Imagine you're in SE Linux admin. You'd be really unhappy if all someone had to do is start up a container with a different LSM as its default to bypass SE Linux policy, right? Any LSM would not like that, actually. So by namespaced, we don't necessarily mean that you're getting a completely different confinement. So these LSMs, when we talk about namespaced, they have to have a way to apply a host policy and if you have a namespace, perhaps stack that with the system policy, or the guest policy. Good thing is, LSMs are working on it. This has been slow. It's taken years. What we're at right now is IMA has patches out there to name space its policy. They floated some RFC patches about IMA audit. They're still working on what they're exactly going to do on their interfaces. They're planning on landing things in stages, so it's not going to be ready for a while. SE Linux in October opened up discussions, email on their mailing list about how they're going to handle namespacing. They need to work through several things. They need to remove a whole bunch of global state and fix up some internal structures, stuff like that before they can even get to their namespacing. SMAC has had patches on the list for a few years now and they do have some per-process rule stuff inside them as well. SMAC is waiting a little bit on what to do with namespacing, partly because Casey has actually focused more on stacking than he is on making SMAC namespaced at the moment. And audit, well, it's a work in progress. I'm sure you've seen things about audit IDs or perhaps using PTAGs. I know the audit people would love audit IDs. 
They're not what we would add an L at the security community would like to see because they're not as flexible. PTAGs actually need stacking and it would help audit as well. We have audit issues right now. SE Linux, App Armor, a couple other LSMs call into the audit layer and we can't tell it where messages should go and stuff, right? The exception is App Armor, which is right now fully namespaced, has virtualized interfaces and it has internal stacking. It does have a few issues around system namespaces and limitations and a few interfaces, actually minor interfaces, haven't been virtualized yet. So remember this, when I say it's not very useful to containers yet, well, that's a half truth. Because App Armor is namespaced, you can use this right now if you apply the stacking patches, you can use this with App Armor to do limited forms of stacking. So you can bring up, say, a Fedora system with an SE Linux and bring up a SUSE container running App Armor or an Ubuntu container. So what happens is the container creates an App Armor policy namespace, it sets the default display LSM to App Armor, it then launches the container. The container sees itself as the guest container whatever sees itself as running under App Armor, it never sees SE Linux, so it doesn't have problems and it just applies itself. The SE Linux policy at the host is getting applied and on Fedora what we have to do is we have to be very careful about what policy we bring up on the host. Easiest is just to leave all the host policy in the unconfined state for App Armor and just leave it so you don't see it. The host pretends App Armor is not there, it's just when you make that switch of the display LSM. Sadly, I do have a demo of this but I haven't fixed it from when I broke it trying to do the reverse where we brought up App Armor and put SE Linux in the container. I just didn't have time to fix it. So with that, we're about 95% of the way there. We still have some problems. There's some agreement between the LSMs, right? Like App Armor, SE Linux, IMA, they all have agreed that they want their namespaces separate from user namespaces. So they want their own namespace but we don't agree on how it's supposed to be done. We have our own ideas. Every LSM is doing things differently. We don't really have a consensus on containers in the kernel which can be a problem from an infrastructure point of view for setting this up. So again, the approach of every LSM doing its own namespacing makes sense. There are issues around X adders having to be namespaced. That doesn't make the file system people happy because bigger X adders mean slower systems. But if you're going to stack things, that's just what you need. And we do need a few more hooks to get us all the way there and a few common interfaces so that containers can set things up properly. So once every LSM is namespaced and we get these extra few interfaces, we should be there where you can do this with every LSM and stack things the way you need. So this is not my work. The driving force behind the stacking has been Casey Schaeffler. He's been working on it and pounding on it for about five years now. The app armor developers have been working hard to make things stacked and namespaced. IMA developers, SMAC developers, the LXD developers. So LXD actually can take advantage today of app armor stacking. We'll talk about that more later this afternoon in another talk. And we have just a minute or two for questions. All right. I don't see any questions. Well, thank you for coming. 
Thank you.
Containers would like to be able to make use of Linux Security Modules (LSMs), from providing more complete system virtualization to improving container confinement. To date, containers' access to the LSM has been limited, but there has been work to change the situation. This presentation will discuss the current state of LSM stacking and namespacing: the work being done on various security modules to support namespacing, the infrastructure work being done to improve the LSM, an examination of the remaining problems, and a demo of a container leveraging LSM stacking so that the host is using a different security module than that of the container.
10.5446/54536 (DOI)
Okay, good morning. Welcome everyone to this talk. Maybe you already noticed that the title is a little bit different of what was announced. So my, the first thing we are going to do is introduce the team that is me with here today. So we have Klaus, who is our product manager for Sussamanagir, and we have Jan as well, who is a full stack developer. And my name is Julio, and I'm the release engineer for Sussamanagir. My first question is, who in this room does know anything about Sussamanagir or spacewalk? Okay, cool. So how many people is using any of those solutions? Don't worry. Okay, so first I would like to talk to you a little bit about the spacewalk, which is a free and open source solution for system management, which was started by Red Hat. You have the website there. It was started around 2018, and it's the base, the upstream code for Red Hat Satellite 5, but for Sussamanagir as well, it's free software, of course. But one of the problems is right now is in maintenance mode, and the future is not so clear. Maybe you are already aware that Red Hat started with Red Hat Satellite 6, which is not related in any way to spacewalk. So people is not sure of what is going to happen about it. Most probably it will be close as soon as Red Hat Satellite 5 is out in two years. So we have done Sussamanagir as well, which is the Sussamancer to Red Hat Satellite. It's an opinionated branch of spacewalk with a lot of new features that you will not see there. We have simple installation, but most important, we have configuration management with salt, because Sussamanagir includes a salt master that you can use for all of your instances. We have as well integration with containers and Kubernetes. If you come to the workshop tomorrow, you will see a lot of this. And finally, another important feature is that the web interface is based on React, so you don't have this old and clunky Java stuff. So quick show of hands. Who have known, knows about salt from Saltstack? Okay, almost the same. Who of you does anything with containers or Kubernetes? Only a few. So you should attend another talk about containers. Kussef, okay, thank you. Okay, so as discussed, there is a relationship between Sussamanagir and spacewalk, because spacewalk is in fact the app for Sussamanagir. Right now, we are going to release really soon Sussamanagir 3.2, which is based on the latest spacewalk 2.8. But Sussamanagir, of course, as a spacewalk, is open source. The only thing is that right now the development is closed. So you can have this source, but it's not easy contributing to it. Anyway, Sussamanagir is and it will be still contributing to spacewalk. Here you can see, for example, one screenshot with some of our developers' commitment to the spacewalk project. As you can see, there is something interesting. It's that the trend was of heavy contribution until more or less 2014. And it is slowed down after that. The reason is one of the other problems we have right now, and it's the launch of Red Hat Satellite 6. So let's say that Red Hat doesn't have so much interest anymore on spacewalk. So those are the problems we have right now. The spacewalk team is not as big as it used to be after Red Hat Satellite 6. So there are no reviews or integration of the patches that we are sending. Some of them yes, but most of them no, because there is no time for them and not enough people to have a look. And the more time it passes, the more difference we have on our code right now. 
Also, we believe that spacewalk has no vision or future for the project. You can see this quote from the spacewalk FAQ. So Red Hat contributions will decrease over time because the focus shifted to maintenance and stabilization of the current set of futures. So no more new features for spacewalk. Also, of course, the community is concerned about what's going to be in the future. So this quote is, for example, from the user-made English people is worried that slowly Red Hat will allow spacewalk to die. So quite interesting to see that there is a very active spacewalk community. Why? Because the jump from spacewalk to the Satellite 6 codebase, Catello Forman and so on, is pretty big. So if you are used to spacework and the ease of use of spacework, jumping over to Catello Forman is a huge step, especially for enterprises. They need to train all their people. So if you follow the spacewalk, mailing lists, it's pretty active. Many people are on there. But yeah, they don't know about the future. So as you can see, this is one of the problems we have with the pull request. This one was created on 2015. And if you go to the spacewalk project, you will see that this is still not merged. Or this other one. It's exactly the same case. And those are just two examples. There are many more. So the fact is that spacewalk asked for help. And until April this year, there was a request for people to help and take over the project, maybe not right now, but in the near future. To be fair here, Red Hat changed the FAQ in April. So the quote you see here is what we took as a base when we approached Red Hat about, yeah, you are asked for someone to take over. Here, we are this someone and we would like to help and we would like to take over. Meanwhile, they changed the FAQ. So this quote you won't find online, but you will find it in the good history. So our answer to this is please welcome Uyuni, which is the name that you see now on the title of the conference and in our t-shirts. We wanted in the end to take over the spacewalk project. We set up this open source conference as a deadline, but in the end, after discussions with Red Hat, this was not possible anymore. So this is the new name of the project. There is no need for you to take a picture of all the URLs because you have the QR code in case you want to take a picture with your phone. Anyway, the website is already online. And why are you Uyuni? Well, sadly, the image is not complete here. But okay, Uyuni is the biggest salt flat in the world, which is in Bolivia. And well, the joke here, of course, is that we are using salt a lot inside our project. So the name is our tip of head to SaltStack for their awesome salt tool. So one of the questions about the future is how is going to be the relationship with spacewalk? Well, this split is friendly, but we will have separate communities. Spacewalk is not going to be any longer the option for Uyuni and neither SUSE manager, but of course, there will be code moving from spacewalk to Uyuni and from Uyuni to spacewalk as needed. This means that we are not going to break the compatibility on purpose, but unlike right now, we will not prevent further improvements because of that reason. So our vision for the future of Uyuni is that for this summer, we want to have fully open development. This means that we will have a public GitHub repository, an OBS project, and I guess that everyone here is aware of OBS. 
We will have public continuous integration, open mailing lists, maybe an IRC channel as well, and of course, the most important first release based right now on OpenSUSE-LIP42.3. Then after that, but we still don't have a strict timeline, we will work on the next release which will be based on the new OpenSUSE version, LIP15, and we want to release the model, to define the releasing model together with the community. So we still don't know if we are going to have rolling releases or releases each month, each six months. This is something we want to discuss with all the users that we hope will come from the space world community and from the OpenSUSE community, of course, and even outside. So with LIP15 out since yesterday, you might be a bit disappointed that Uyuni will be based on LIP42, but the problem here is Java and Python, which are the main languages used in this project, and especially the switch to Python 3 is much more work than we anticipated. We are also looking for help here. We have many code, much of the code already ported, but going to LIP15, also with a new Java, especially looking at Oracle's Java release model, where they, I think, they released Java 10 and gave it a lifetime of six months. It's a bit hard for us. We also, the main team currently working on this is working for Sousa Manager, and we will have Sousa Manager release next month. So the team is currently busy on this release, but we will then find time to work upstream for Uyuni. Open all the source code on GitHub and start the work on LIP15 with, hopefully, an initial LIP15 based release sometime in autumn. Yeah, one of the problems we have here as well, and we need to work that out, is that the Python change is not only for the source code we created or we changed from the space world, but we have some external dependencies like a cover for the bare metal machines. And, well, for this, for example, we will need to take decisions with the community if we are going to adapt it to Python 3 or maybe just consider some different solution to handle that. So as I said before, Uyuni will be the upstream for Sousa Manager in the same way that we have open Sousa for Sousa. The futures will come from the Uyuni team and, of course, from the community, so we are ready to accept all your ideas, all the ideas from the other people and, yes, some examples. We don't think that Uyuni needs to be restricted only to open Sousa. Right now, we have support or Uyuni is able to work not only with Sousa and open Sousa, but with Santos and with Red Hat Enterprise Linux as well, but we would like to add support, of course, for Debian, Ubuntu, or any other distributions that the community can consider interesting. And not only Linux, well, maybe if somebody wants to add support to VSD or to Microsoft Windows, that should be possible as well. Translations or any idea that the community can have about the future. And if anyone thinks that Windows, that sounds strange. Well, we have code ready to manage Windows. It is possible through the Windows management instrumentation and it maps quite well to Uyuni. So how can you be part of our community? At this moment, you can follow the Twitter, by the way, you will find the correct Twitter account at the website, because I didn't change it here. You can already sign up at our mailing list. Right now, it's only for announcements, but this will change really, really soon. And, of course, you can spread the word about this new project. 
And for the new future, and that means in the following weeks, some of these things, and during the summer, others, you will be able to report problems and wishes. You will be able to report it initially via the mailing lists or maybe IRC. And during the summer, of course, via GitHub issues, most important, you will be able to fork the GitHub project and send us your pull requests. And I don't want to scare anyone away at this point, but this project is not a simple configure and make. It's much more complex. So we welcome contributions, but you should really start with one of the pre-packaged versions or even the vagrant images we will distribute and start from there. But it's complex, it's a beast. Lots of Java code, Tomcat and so on. But I would say that anyway, you don't need to rewrite the whole Unicode base. You can start with small fixes, learning all the stuff that we have inside. Yeah. Well, in this case, it's especially important to have good communication with us with the team. We are always happy to give you some pointers, help you with some stuff to begin with the code base. And yeah, it's like always, maybe it's the best idea to pick up some small, tiny bugs to get used to the code base. And yeah, we're always there to give you a hand. So the interesting thing is that tomorrow at 1 p.m., we are going to have a workshop at the room 305. So it will be a practical demo of Uuni. And this means we are not going to show it on the screen. You will install it on your laptops using vagrant images. We will practice installation. You will use it to manage open-source instances. You will learn how to build, publish, and manage Docker images as well. And you will take this installation with you so you can play at home, at the office, or wherever you want. So remember, tomorrow at 1 p.m., at room 305, and there is something important as well. Those images are pretty big, around 50 gigabytes. So it will take a lot of time downloading them on the Wi-Fi here. So we are going to stay upstairs at the room 322. And you can come there today or tomorrow before the workshop to download the images. You can just plug in your network and in 10 to 15 minutes, you will have everything you need to run the workshop tomorrow. So at this point, I guess that you should have some questions that we are happy to answer about this, about the relationship with the spacewalk, the future, how you may contribute, or whatever you can think about. So we are ready for questions. Too much for everyone, yeah. You can ask what is sold or whatever you want, but we will give short answers. That's correct. Yeah. Will it work with OpenSUSE, Fedora, CentOS, REL, Oracle Enterprise, Linux, and so on? Will it work with both Leap and Tumbleweed? Yes. There is one catch. Julia already mentioned it. The image you will download for the workshop tomorrow is 50 gigabytes. Why? Because it contains all the distribution packages. So with Tumbleweed, you will have the problem that you will have a lot of downloads to keep the packages in your Uni instance up to date, because this is the core of this project. It knows about all the packages and it knows about all the packages which are on your clients and can do version comparison and can tell you, hey, on this client, you need to update your kernel or whatever. So this is the main problem with Tumbleweed. And of course, most of your clients will be always out of date. There is already code inside for Debian Ubuntu, but it's not heavily tested. 
And also on the space for community, there are questions once in a while. Some people get it to work, others don't. So this is currently uncertain. But the basic operations are not tied to any specific operating system. So we know that spacework in the past worked with Solaris, for example. But the code is unmaintained and the state is unknown. And in case you don't intend to install Uni yourself, then if you're working in a company or at a university, talk to your assistant admin and spread the word. Any other questions? Then again, we will welcome you for the workshop tomorrow. And remember to come up as stars to get your images. And we will be around. You can recognize us because we have these T-shirts when we are the only ones yet. Thank you. Thank you very much.
Learn an easy way of keeping your systems configured and up-to-date via open source tooling, even for huge infrastructures.
10.5446/54542 (DOI)
Welcome everyone to my talk. It's always see what's new and best practices. Not to get you confused. It's not about the open suzer conference. It's about a command line tool for the open build service. Not me, a few words. I'm a service engineer at suzer. I'm a maintainer since this year. I'm contributing to OSCE since December 2016. I'm a father, a navigator, a time management specialist. I can do. About the agenda and what will happen here in this room is that I will tell you about the news of OSCE, which is quite difficult because I have never given this talk before. So I just picked one date. And it's the beginning of 2017 and started at this point for my what's news. After that, we will see a best practice. It's really low level if you know OSCE in detail. This could be a little bit boring. Just grab a drink or relax. After that, I will tell a little bit about the Python 3 status. We are trying to port the OSCE to Python 3. And I will tell you how far we are and what needs to be done and how you can help us with that. The next thing is plugins. There is a special thing about that, but we will see. And then I will talk a little bit about the future. I just realized that I got a 45-minute spot. That's not what I wanted. I just clicked wrong. So this will not last 45 minutes. It will, perhaps we will hit the 30 minutes. But yeah, we will see. So what's new? Big news, we support IPv6. Then we improve the build recipe, I don't know if you are familiar with that. If you just issue an OSCE build. The script does not know what you want to build if you have more than just one build recipe and just one more repository or architecture, so we build a neat algorithm with tries to detect what you want to build. It's not perfect, but it's a start. Then we have a new OSCE blame command. It's like git blame. So it's just issue OSCE blame. And you see who has done changes to the files. Then we have generic build options which can be passed to the build script. We have Docker builds now. There are a few workshops of Docker builds, I think, done this conference. We now show the build duration in the build history. That's not big stuff, but a lot of small stuff. The config file of OSCE is now in the right place. If you have noticed, if you install a new OSCE, the config file is not in the home directory anymore. In the plain home directory, it's now under the config file folder. We have an OSCE RPM lint lock. So we can just look at the RPM lint lock from the OSCE command line client. We try to improve the change root command of OSCE. So we now mount file system which are important, like dev or PDS and such. We have a SHA256 checksum verification which makes it even harder to sneak something into the build service that you don't want to have there. And we dependencies are now supported in the local builds. We can even send sys requirements to the running builds. We are now able to div meta files. We had some problems with SSL when the complete client got stuck and nothing was done anymore. This is also fixed by now. It was actually fixed like one week ago. We also have interactive request mode. It's when you are a reviewer and you have to deal with a lot of requests, you can go through them in the interactive request mode. Does anyone know what a request is? Okay, cool. We have one that knows it. I will just explain it. If you have a new package and you want to get it into tumbleweed or some devil project, you start in your home directory and then you create a submit request into the specified project. 
And with the interactive review mode, a reviewer can issue the command and get all requests that are important for him and can go through the list one after another and can look at the build status, can look at the lint log, can look at the divs and all that is there. So it's just an easy way to have requests done. And I listed the releases since 2017. There are a few that are just for all of you what have been done since then. The best practice is I will just show you how you can start a project and a package on your own and bring it into the build service with the OSE command line client. I don't have anything against the web interface. The web interface is very nice and it's really, really good. But sometimes I just want to do it on the command line for reasons. And for this, I created an example source for the package. And because I'm a very good developer, I just used a simple C hello world program. Which you can find under this GitHub link. It's a very simple program. And I created a project for this package. It's basically my home project. It's home M-streakle. You can look at it and see what I have done and see what the result of this all is. So there are basically two ways you can use to create a new package. You can issue the OSE meter minus E, which stands for edit. And say, okay, I want to edit the meter data of a package. And you just add the package name. And if this package is not there yet, it gets created. As soon as you enter the basic information like the title and the description, those can both be empty, but the text must be there. So you cannot just leave it blank. The title and the description text must be there, but they can be empty. So after this, this gets transmitted to the server. The server creates the package directory and everything is fine. The next way is to use OSE MK pack, hello OSE, which does basically the same, but on the client side at first. So the big difference between these two is that if you issue the one with the meter, you have to check out the package after that. So you don't have a working copy automatically generated on your system. That's the big difference between these two. And then to get a starting point, you need a spec file. So I just created a simple spec file with all the information that is needed, like, okay, the name, the version, the release, the license and all what RPM needs to build a nice package. I have a prep section, a description and everything that is needed. As I said, it's a very basic example. So how do I get the sources that I want to build? One way is to use service files. I choose this option because I find it pretty neat to get it out of GitHub automatically and to deal with tar balls and that thing and to download the tar ball and upload it to the build service. So I just used a service file. For the service file, this is the basic anatomy of a service file. You have the service tag. Then you have the service name you have. You can issue. There are a lot of services. At the end of the presentation, I have links to all relevant documentation of services and all that I will present here. So I will not go to every detail and explain what this line does and this line does that. I will just give you a basic concept. So the service name OBSSCM will get the file from GitHub and then there will be some magic at the end. You have something that can be processed. So it gets source downloaded as an OBS CPIO archive. That's just a format we use for downloading the sources. 
And then it gets archived as a tar, compressed as a Gzip and then the modified spec file is written with the version number replaced in the right place. So now we have everything. So this is just a local run. So if it's OC service run, you can test it. It will check out locally, but it has nothing to do with the actual build. It's just to see that the service file is correct and that it will download the right sources and that everything is fine. Now we have everything we need. We have the service file. We have the spec file and now we can check with OC repos which repositories are valid for my package or my project in which I can build. Then I can do an OC build with the repository and the architecture and the recipe. The recipe is the spec file. In our case, that would be OC build opens with a factory because we want to build for the newest shiny system. The architecture is x8664 and the spec file. And then it starts a local build and you can see, okay, if it builds, it's everything fine. If not, you can just look at what happens, what is wrong and what needs to be done. A few words about meta information. Meta information I use to control the build and the package and to store information about the build, the package or the build environment or the project. Everything can be stored in the meta information. If you do OC repos, I get for my home project, I have all those repositories and architectures I can build against. If I take my little greeter, it will be built for all those repositories and architectures. But I don't want to. I just want to build for a few of them. So the way to go is to disable some builds on the package level. To do this, you can edit the package meta information and say, okay, I have all these repositories. I just disable the complete architect. I don't want to build for the ARM 7L architecture. I don't want to build for the architecture. I don't want to build for 586. And I don't want to build for anything other than open SUSE. So I do this with the OC meta for edit, package, hello OC. And then I type this and magically some magic happens and in the end I just got the repositories I want to build for which is basically open SUSE with the x8664 architecture. So this is what happens then. This is just a small overview about what conflicts are there. As the project conf which defines the build environment, you can say, okay, I don't want to use a specific package for my build. I want another version of the package which is not defined in the repository. So I can use the project config to handle how the build environment is built. And even add additional repositories or packages here. And there is a very good description in our wiki which I put here in the presentation. So if you are interested in meta information, you can just look at this link. Then the package we already seen that defines the package. And the basic in the most simple way it's just the package name, the package title and the package description. The same we have for the project, like a project name, title and a project definition. Yeah, description. But there are a lot of good sources where you can look up this and you can look this. So now we have everything we need. We have a local build that is okay. It builds for all repositories we want. We have the services file. Everything is fine at this point. There is something that is called a change log. Some people like to document their changes. Some people don't like to document their changes. 
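As a small aside before moving on: the repos-then-build loop just described is also easy to drive from a script. A minimal sketch that shells out to the osc commands shown above; the spec file name and preferred architecture are made-up placeholders, and it assumes the usual two-column "repository arch" output of osc repos.

```python
#!/usr/bin/env python3
"""Run a local 'osc build' for the first matching repo/arch the server
offers for this package.  Thin wrapper around the CLI calls shown above;
run it inside a checked-out package working copy."""

import subprocess
import sys

SPEC = "hello-osc.spec"               # placeholder recipe name
WANTED_ARCH = "x86_64"                # only build for this architecture

# 'osc repos' prints one "<repository> <arch>" pair per line for the
# project this working copy belongs to.
words = subprocess.run(["osc", "repos"], capture_output=True, text=True,
                       check=True).stdout.split()
pairs = list(zip(words[0::2], words[1::2]))

for repository, arch in pairs:
    if arch != WANTED_ARCH:
        continue
    print("building %s for %s/%s locally ..." % (SPEC, repository, arch))
    # Equivalent to typing: osc build <repository> <arch> <spec>
    rc = subprocess.run(["osc", "build", repository, arch, SPEC]).returncode
    sys.exit(rc)

sys.exit("no repository offers %s for this package" % WANTED_ARCH)
```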
But in most cases it's a better way to go that you document what you have done and when you have done it and why you have done it. Because sometimes let's say you get ill. If I trip over the steps there and break my leg and I can go to work and somebody has to understand why I did some changes to a package, it's pretty hard. So there is a change log where you can describe what you have done. OSC even provides a way where you can create a change log pretty simple and complete automatic. It's just OSC VC and an editor opens and you can type whatever you want. It even provides the date, the time and your name and your email. So everything will be fine. Just type what you have done and write it, save it and you have a changes file. To check what, so this is just locally in your local working copy. Nothing here happens actually on the server at this point. You see it if you issue an OSC status, you see that they are all marked with the question mark which means that I'm not under a version control yet. So you have to add them. If you want to add them all, you can say OSC at asterisk and all new files are added automatically. And after that, they get the status A. And there are just a few statuses here like added which basically means it's a new file I want to upload. Conflict is when there are changes upstream made and you need to change and you have changes in the same file and then you do an update of your working copy, then you can get into the conflicted state. The deleted state is also you have to delete it locally. And yeah, I think the statuses are all self-explanatory. So is this, at this point, we are even not on the server yet. So we are just locally. Even now we are not on the server. This is the command which does the last step and says, okay, put it all onto the build service and let's get it out. It's the OSC CI. It contains the diff, what you have done. It contains the files, what you want to upload or to delete. And the description, the command you have there is you can choose whatever you want. And after that, this one gets uploaded to the build service and the build service, if everything is right and everything is configured properly in your project, the package will start building on the build service. Yeah, of course, you have to save it before everything happens. And then it gets transmitted. It gets the revision number one if it's the first upload and we are done at this point and the file is uploaded and everything is fine. Just to give you an example of what the build service can do and how you can control it with OSC, I've chosen just one random feature which is constraints. Tridents are a way to control the build server and say, okay, I need more build power, more power. And it can be done on many different ways. You can, I've chosen here to say, okay, I need 500 gigabytes of space to build my hello world program. So I get really, really good power. This will of course not work because we don't have any workers which this amount of this space is just ridiculous. So this will fail. It will not build. So we can check our constraints with OSC check constraints and the repository and the architecture. We can check, okay, how many workers will be able to build this or am I using here constraints that are not even realistic anymore? So I can check, okay, my constraints are okay, which in the 500 gigabyte case will of course not. 
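In commands, that last stretch of the local workflow looks roughly like this (the commit message is of course up to you):

    osc vc                                     # opens an editor with a prefilled .changes entry (date, name, e-mail)
    osc status                                 # '?' = not tracked, 'A' = added, 'D' = deleted, 'C' = conflict
    osc add *                                  # put all new files under version control
    osc ci -m "Initial version of hello_osc"   # upload files and diff to the build service; the build starts automatically

And the deliberately unrealistic _constraints file from the example would look something like this:

    <constraints>
      <hardware>
        <disk>
          <size unit="G">500</size>   <!-- no worker offers 500 GB of build disk, so this job will never be dispatched -->
        </disk>
      </hardware>
    </constraints>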
And if I issue it without any repository or architecture, it will print me for all repositories or architectures the number of workers that are able to build my package. But in this case, I lowered it a little bit to, I think, 50 gigabytes. So we have 25 workers with 50 gigabytes. So yeah. And the build service doesn't let you in the dark when you build the package and it finds no worker which can build it. You get the message no compliant workers and it even gives you a hint which of your constraints are too scary for the system or too big or whatever. So it says here, okay, you wanted to have 500 gigabytes of hard disk? No. There is a link where you can learn about constraints. It's a good tool to control the build service and the workers you want to build on, especially if you have a more complex package than just a hello world. This is what I told in the news. We have an OEC blame command. This is how it looks actually. I played a little bit around and was wondering why my build took so long. So I just issued an OEC blame and said, oh, there's a sleep 600. And now I can see, okay, someone named Leslie L. That's, I don't know who this guy is. Put the sleep 600 in and now I can go to him and say, hey, put it out. And the OEC has a few ways to investigate a package. It's you have an OEC info which provides you the basic information about your working copy. It's basically just what revision you have, what directory you are in the link in the OBS that you can look at it in the web UI. Then you have the OEC log command which gives you for every check in, you see who checked in, what time, then the source empty five and with what message it was checked in. And we have even more, we have a build history where you, we have two things here. We have a job history and a build history. That's a little bit confusing for people using OEC the first time because build history is, in the build history, they are just succeeded builds. If you want to see builds that are also failed or for some reason got interrupted or whatever, you have to use job history because you won't see failed builds in the build history. And here you see the time, the source empty five, the version number, the revision number, the duration, how long it took. And here you see my build with the sleep 600, it took 700 and eight seconds to build. And in the job history, you see also the failed ones. And there you see, okay, the build started at 40, 1427, failed. And the one who is telling this is the dispatcher. The dispatcher is the one that gives it to the workers. And so you can, it's possible, I hint that there is something wrong with your constraints, which was in this case, the truth. Then you have the OEC results. The results show you what the build service is doing in this case for open-suselip 42.3, it's still building and it even tells you if you issue it with a verbosity switch on which worker it is building, then you see which repositories are disabled. And if it's done building on a server and you want just to get the binaries to test it or to look into it, you can issue the OEC get binaries command and you get all the binaries, yeah, the RPM if you build for open-suselip, of course. Yeah, but you get a lot more than just the binaries. You get also the build environment definition, so you can look into the exactly build environment which was used for this build on the server. So you see, you can hear, debug some things. 
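The investigation commands mentioned here, roughly in the order they appear; inside a checked-out working copy the project and package arguments can usually be omitted, and the repository/architecture pair is again just the example used above:

    osc checkconstraints openSUSE_Factory x86_64   # how many workers can satisfy the _constraints
    osc blame hello_osc.spec                       # who introduced which line, and in which revision
    osc info                                       # revision, directory and API URL of the working copy
    osc log                                        # check-in history: revision, author, date, source md5, message
    osc buildhist openSUSE_Factory x86_64          # succeeded builds only
    osc jobhist openSUSE_Factory x86_64            # all build jobs, including failed or interrupted ones
    osc results -v                                 # current build status per repository, including the worker while building
    osc getbinaries openSUSE_Factory x86_64        # fetch the built RPMs plus the build environment, statistics and rpmlint log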
And you get a statistics file which contains how long it builds, how much load was created and such things. And you get the RPM lint lock. So that's just a small part of what is possible with the OEC. The OEC has in the moment 89 commands which you can issue, so it's pretty big. So to show every command here, it's just, it's too much. The next topic is taming the chaos. It's like, yeah, OEC is Python 2 and Python 2 will die sometimes, perhaps, we will see. And we are trying to be modern and to be ready, so we are porting to Python 3 at the moment. The challenges here were that it was written for Python 2.4 and it needs to stay compatible with every Python version, even 2.4. So I cannot just say, okay, we'll just use Python 3 everywhere and the other Python version doesn't interest me anymore. I have to be compatible with every Python version here. And the other problem is that we have a lot of tools and plugins that use the OEC core libraries and I don't want the plugin developers to, I don't want to cause more pain than needed. So I have to be sure that the behavior of the function is the same than before. One another big problem was that there is no Python 3 version of the URL grabber, which is a module which handles mirror downloading. You give the URL grabber a list of URLs and it tries every URL and the first hit wins. But this one doesn't exist in Python 3, so I had to implement my own module on this one. The URL open.read now returns binary data instead of just text data and we use this to get the information from the server. So all functions or processes that run after this needs to be adapted. And a lot of lot and lot and lot and lot and lot more pitfalls on the way. So the progress is at the moment we have a running Python 3 OSC branch. You see it here. The Python 3 package can be installed besides your existing OSC installation, so there should be no problems with interfering config files or something. And there is documentation to be done of all this. So what you can do to help me here is use the Python 3 branch and report bugs in GitHub with the prefix Python 3 or email or whatever. I just need more people to test it because we have so many commands. I said 89 commands with even more combination of switches and parameters and all that stuff. So yeah, testing is good. So please test. Plugins. Let's here be plugins. The problem is that I finished this presentation yesterday and I didn't have the time to finish this topic. So shame on me, but I don't have anything to say about plugins other than that there are plugins. Yeah. So I'm very sorry. So let's talk a little bit about the future of the OSC, what is planned. One big news. I'm planning a rewrite in Perl. No, of course not. My main task is to get it Python 3 compatible. That's my main focus here. And the password handling is enhanced at the moment. Will be enhanced. There are two people working on this at the moment. The passwords are stored in clear text. So this is not very good. They are in config files, which shouldn't be readable and everything, but storing clear text passwords should not be done. So this is done at the moment. Then there will be a lot of more improvements regarding the interactive mode I mentioned. And one big point is to preserve the local build environments when you use Docker or KVM builds, because at the moment you can just switch to old build environments when you're using a change root build, but not if you're using virtual machines or Docker builds. 
And of course, help and documentation needs improvement, of course. That's always needs improvement. And one thing that isn't on this slide, I just was made aware of this yesterday, is that testing always sucks. So we have unit tests with which test the basic core lib functions. And if I would blindly rely on these unit tests, my Python 3 port would be done around three months ago, because I switched everything with the Python 2 to 3 command. And the test suite was running in Python 3, and I was like, yeah, I'm the biggest Python developer in the world. But then I started to using it, and it was, okay, perhaps not. So the testing needs improvement. And if you have any idea how to do testing, please talk to me. I have a basic idea how to do testing, but not in this complete framework. The end. The end is the end of my presentation. So thanks for your attention. I'm not ready yet. If you have questions, just email me. I'm on FreeNode in the open service channel. Or just getting contact over the GitHub project. And as always, patches are always welcome. Contributions are always welcome. We're a great community. And thanks for your attention. Now you can clap. Yeah, and of course, all credits for the slides go to Richard Brown. So if you have any questions, I will be, yes. Contact, of course. This one? Okay. Cool. If you have any questions, I will be around here the next days, of course. Mostly you will find me behind one of those video tables. And I cannot just go away from there. So if you want to talk to me or want to discuss something, just leave me a note. And I will get back to you. Okay? So, yeah. Have fun at the rest of the conference. And enjoy yourself.
Things you may have missed Many of us use osc on a daily basis. This talk will be about new features in osc, plugins and best practices. At the end I will give a short outlook on what new features are planned.
10.5446/54545 (DOI)
Hello, everybody. Welcome again. My name is Panos Uriadis. I'm QA maintenance engineer for SUSE. Today I'm going to talk to you about a personal project of mine that I did during Hack Week that I received a lot of good feedback and interest. I'm here setting it with you. The project is called the catastrophes, which is a Greek name. Most people couldn't even pronounce it. The reason I picked the Greek name is because, why not? I mean, since I'm in the containers world and testing, everything looks to be Greek there. Even Kubernetes is a Greek word. So I was like, I will use a Greek word. So what the catastrophes means for the people who are interested in that, because Marcus asked me before. Well, if we split the word, it's actually N, which is in, Kata, which is two, and Istimid, which is stallum, which is Latin, means medium. So it moves something into a location or medium, which is install. If you just look at that, and then here is the installation that everybody knows. So a catastrophes means installation. So my project is about installing something in that case. What it is? What it is? Yeah, I just came up with that. It's an open source project. You can find it in GitHub. And what it does, it's supposed to test packages inside the open source containers that we see. Okay. And how we do that? I use Docker and SystemDnSpawn. So I use both a process container and a system container. And I use then the Elk stack and the file bit. The whole project is in GitHub. And in case you like this, feel free to contribute to that and let's see if there is more interest about it. So how it started? About one year ago or even longer, it was HACWIC. And I was checking containers. I was like, what is Docker? What it is? I was learning stuff. So basically, I said, okay, I am an open source guy, so let's use the open source containers. So I go to GitHub, I pull the Docker image we have for Tumblr. And I see that we have orphan packages there. Like if you do Zipperserts in the package, this package was coming from nowhere. And then I filed a bug. We talked with Martin, we talked with Flavio, and we make it fixed. But it was a bug. Next, we had wrong repositories. After that fix, three months later, I downloaded again the new image for open source container trying to do Zip and refresh and Zip and refresh fails because the URL in the repository was wrong. And I was like, what is going on? We have packages coming from nowhere. We have broken repositories. How people and why I am the only one that finds those stuff? Like there was no bug report. So after those, I started looking how I can test it. So a lot of people how I can test containers in OpenQA. And the feedback I got is that basically you have to install OpenQA first, and then you have to spawn a virtual machine inside this virtual machine to run in a container. Which looks like too much. Like it's way more things to be done in that case. And it's really, really slow compared to the typical testing of containers. For those who are interested, since we are in the university here, there is a Google summer of code project for that. That people and students are interested to help us and contribute to OpenQA by investigating a docker or a container backend in that case. So feel free to check this. And so I talked with the guys that we do testing. I saw also this and then I realized that, you know, we actually push open-source images one year ago with no testing them. So I was like, let's try to see how I can test them. 
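For reference, that kind of spot check can be reproduced against the public image in a throwaway container, along these lines; the image and package names are simply the ones I happened to look at, so treat them as examples:

    # refreshing the repositories makes a broken repo URL show up immediately
    docker run --rm opensuse/tumbleweed zypper refresh
    # ask zypper where an installed package comes from; an orphan shows up without a matching repository
    docker run --rm opensuse/tumbleweed zypper search --details --installed-only krb5-mini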
That was the motivation behind the whole thing. So the project tries to answer how many packages might be problematic without my knowledge. I mean, I was just lucky when I did the Zipper search on that specific package, which was Kerberos 5 mini in that case. But can this be that there are also other packages that are broken that I'm not aware of? And if so, how do we know unless I test them all in that case? How many packages are supported in our open-source container images? Are the same packages that we support in Tumbleweed, all of them supported also in open-source containers? I mean, if I open a bug, would be a legitimate bug or they would say, no, it's containers. We don't really support it there. And how big undertaking is going to be this, to test everything? How much time I'm going to take for that? Since it's a community-driven project, it's just for fun, learning, experimenting. So I would like not to spend that much of time. How much resources I'm going to need for that? How difficult is going to be? And how accurate the results are going to be? That was some first questions that I had. Okay? So expectations. The expectations I had is like, this thing has to be fast. What fast means? Fast, definition of fast for me, is in that case that I'm going to test the whole ecosystem of open-source packages. This means roughly 50,000 packages. I need to test individually 50,000 packages. So in that case, I would say a week would be the maximum for me. I wouldn't wait more than a week to see a result, five days or something like that. Trust. Trust means if I find something that is broken in the container, I expect that the same behavior happens in your laptop. If this is not true, my whole project is... No, it's working. Okay, perfect. Okay. Yes, I was saying that the whole project makes sense only if what I find is real and it's a bug officially and we have to fix it. Otherwise, if I see different behavior in a container that I've done in a laptop, on a desktop, it doesn't make sense to do it. And it has to be simple. Simple meaning anybody that wants to contribute, anybody who's not familiar with containers but can jump into it really fast without much knowledge. Requirements for the test environment in that case. I had to build an environment in order to test that. So infrastructure as a code. I needed something that no matter if I'm sitting in my laptop in my workstation at work, if I visit my friend that has another distribution, I would like not to spend time installing and configuring from scratch and messing up his system. So everything, the whole project is containerized. You just get cloned, run one command, you don't infect your system with any of the things I'm doing. It should be low overhead, meaning like it should be able to scale. If I run it in my laptop, which is a dual core with hyperthreading and 4 gigs of RAM, I should be able to run the whole project in my laptop. And of course, if I move the project into a different infrastructure which is bigger, 20 cores, more RAM, the project should be able to take advantage of this and be faster. Right? And then I needed isolation for exactly what I told you before. Like, I don't want to mess my host system when I'm doing testing in that case. So for isolation, what we have, we have virtual machines that they offer great isolation. But the tooling for making this automatically was a little bit not the easiest, let's say. As I said before, I would like to have things really easy for that project, like one command things. 
And also with the virtual machines, I realized that the project is not going to be lightweight. Because if I need to have one package per virtual machine, this means I need to spawn 50,000 virtual machines. This means I will end up in 2030 in that case. T-route, yeah, quite the opposite of what the virtual machine is, terrible isolation, terrible. So forget this. And I ended up with container with something in between. They offer, okay, isolation, the container can see, the kernel of your system can access the hardware resources, but at the same time is not going to mess my host or packages in that case, only touch the file system of the container. So I ended up with containers also because it was time for me that I was experimenting. The methodology now here. Test, student, influence each other. What it means. Back in the day, I was a journalist writing Linux articles and tutorials and guides. And one thing that I really learned is that if you write a guide for somebody to follow, you have to have a clean system always. If you are going to provide information on write documentation for somebody, you should make sure that you download the ISO, you boot a virtual machine, you do what you have to do, and this thing should be reproducible. So in that case, let's say that we have a test that I have to test packets A. So we spawn a virtual machine in that case, I install packets A. Then if I continue in this system and install packets B, it might be that packets B has a conflict with packets A, but this is not clean anymore because packets A is installed. It should be a virtual machine, packets A. Another virtual machine, packets B. Something like that. So one test, the test for testing packets A should not interfere with test of packets B. That is the scenario. Test should be ephemeral, meaning as soon as they run, they get automatically deleted. They don't take space in my machine, they don't use my resources anymore. I don't want to manually take care of removing containers, their images and other logs of Docker in that case. Test should not affect the host as they are running, what we said before. So this is a screenshot. Yeah, not the best one. I'm using Timux here, but I can give you a demo later on when I saw you better. What I'm doing here, basically, you see every line you see here, every of these green lines that has success, which means open suzerox in that case, all the packages in that screen is installable, is one container. And this in that project, here you see, for example, how much time remains until the test is complete. Here you see file bit harvesting for logs and sending them to Elkstack. And here I have wrote a really, really high-key parser, since I'm not aware of any parser for zipper logs. Maybe if you are, please let me know. So I just look for errors and I parse for specific strings in the logs to see if they are happening. So the last time I ran that was in January. And we find, I found out that basically 98% of our 64-bit packages we have in open suzer is super healthy and only the other percent is not installable. The good thing with that project is that even if a package can be installed, sometimes we have messages happening in the post installation scripts. And you can aggregate those and see exactly how many packages are the same messages. And basically, when we release an open suzer snapshot, if we run this after that, we maybe can say, okay, if we fix this specific bug, 10 or 50 packages is going to be also fixed. So let me continue. Use cases. 
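The core of that methodology is small: one throwaway container per package, removed as soon as zypper finishes, with only the log kept on the host. A stripped-down sketch of the idea, not the project's actual script, with paths, image name and concurrency limit as placeholders:

    mkdir -p logs
    while read -r pkg; do
        docker run --rm opensuse/tumbleweed \
            zypper --non-interactive install --auto-agree-with-licenses "$pkg" \
            > "logs/$pkg.log" 2>&1 &
        # crude throttling: keep at most 20 containers in flight
        while [ "$(jobs -r | wc -l)" -ge 20 ]; do wait -n; done
    done < packages.txt
    wait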
Visualization and metrics for the installation of our packages. You can either use the CLI and see the results on the command line, but you can also use elastic search and Kibana and go to your release manager and give him graphs and bars and I'm sure he will like it. Then I was like, what if open suzer had serverless infrastructure that a user before go to Facebook, Reddit or open to the forums and start complaining that package A is not working in his machine because he has installed a bunch of custom personal repositories from other people, he can go something like broken forever or it's just me in that case. So he can just, for the people that were in the previous presentation that I did, he can just send a request with a package and he can see if this package is installable or not and if it is in his system, it's not, then there's something wrong in his system, like he shouldn't file a bug for that in that case. Integration in build service, I'm not sure about that, I'm not very familiar in build service, actually I'm not at all familiar with that. I assume that there might be a use case for that as soon as we build the package, we can just run this outside of build service in another infrastructure in order not to cause any overhead in the build service in that case and have the output in a database. So this can be run also during building in that case. And this is a good entry point for newbies like me in that case, that they want to be maintainers of some packages. When I was volunteering for the Ubuntu bug squad, I remember that they had a panel that you can go as a noob basically and say, you know, I want to contribute to this project, I don't want to do translations, I want to write some code, they want to be a maintainer, but I have no idea from where to start, I need some easy tasks. And with this project, since you aggregate all the problems that might happen in the packages, you can classify them as easy things to do, like trivial things to do, and somebody you can give it, you know, take this and fix it. So there might be a collection of easy to solve problems for the people who are passionate in Ubuntu as maintainers. And then we probably can have like a comparison. We can have a live comparison of the status and the quality of the installation of packages in SUSE, Fedora, Ubuntu, Debian, you name it. You can convert them and we can also check live what version of packages they have. For example, I see people in forums say, we have the latest GNOME, no, no, we have it, but yeah, but we have this and that. So with this one, if you have all the logs there, you can actually take this information there, you can have a graph and everything. So I will talk a little bit about the future plans that I have with that and then I will give you a demo. So I will try to put this in Kubernetes to have it in a fast way, like you have a function, you say test this package, this package gets tested and I get the result if it is installable or not along with the log of Zipper in that case. Really, really fast. And then I would like to experiment with that, but now that Markus told me that he doesn't like this, you have to reconsider. Maybe with Cata containers, Cata containers and Google device or would be the thing that I would like to play just for experimentation. This is a fun project. Then I will try to learn a little bit about packaging and build service and see if it makes sense to put it there and how it works. 
And again, what we said before, entry point for Nubis and comparison among other distributions. So I am basically planning to have a website for that thing that you guys visit and you see results. So that's the presentation. Let me give you a demo with that. So what you have to do is clone the repository. The repository, I have already cloned this in that case. So let's go here. Let me open my browser. So here, if you go to GitHub, I basically have a tutorial, MD for the people who would like to try that. And also in the very front page, I have a logo though. So it's professional. You can just go into this folder, Docker in that case. So we go to, let me zoom. And here, there is a bunch of scripts. Let me see how much time we have. Okay. Another 10 minutes. So you can do test it. It should be simple. Like that. It will not infect your packages. And you put the name of the package that you would like to test. So what's happening in that case? In that case, there is an open-suzet with container downloading in my machine, running up, running the zipper, install Vim, as you know, with non-interactive accepting licenses and everything because I can't interact with that. As soon as this is finished, it will kill completely the container and remove it from my machine. But it will take the log from zipper and store it locally. As soon as this log is present in my system, file bit, here is going to collect it and send it to Logstas, which is a storage for enterprise logging. Logstas will do some purely idiotic filtering that I'm doing because I don't know any zipper or parser for that. So I'm just taking the whole thing. And it sends this to Elasticsearch. And I have Kibana, which is the web UI for Elasticsearch in case you would like to have some graphs and stuff like that. Perfect. So we see here that Vim is installable right now. That's a live test in the Tumbleweed. So if I open, for example, that's interesting, opening Vim. Yep. You see this is the complete zipper log. So in our open-source containers, that's what will happen. Six new packages will be installed. And here is the log of the whole thing. This now goes to Kibana, as I said. So... Yes. Here you can see that file bit took the file, send it to Kibana. And I have here the whole log in case I would like to have a central database or have a website, a web server that feeds the content from that database. Okay. And I think that I have something here that you might like. Yesterday, I ran this in my workstation at work. And I can give you some results for the packages that they are no-arts architecture. Unfortunately, the 64-bit packages are about 19,000. And this takes about one day and a half. And since I started it yesterday, it's not finished yet. The only thing that got finished is there are no-arts packages that they are roughly 15,000, I think. So I'm going to take this file and put it into my project in that case. And let me unpack it. So now we will see file bit getting crazy reading 15,000 logs. This means like since yesterday, I've managed to run 50,000 individual containers with one package per container. So we know exactly what's going on in our open-suzet tumbling with container images in that case. Let me, since this is sending the stuff to the database, let's use the parser to see some stuff here in the terminal. So you see here the timeout. I was a little bit too happy about that. And I was like, okay, I have six core machine at work. Maybe I can run 200 containers at the same time. Yes, I can. 
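The log-shipping leg of that pipeline is plain Filebeat-to-Elastic wiring; a minimal configuration in that spirit could look like the following, with the log path and Logstash host being assumptions rather than the project's exact values:

    filebeat.inputs:
      - type: log
        paths:
          - /srv/egkatastasis/logs/*.log
    output.logstash:
      hosts: ["localhost:5044"]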
But what happened is that my internet connection is not that fast. So basically, I run into the problem that I have a hard time out that I don't want to wait forever for a package. There might be packages, if we check those packages because the open-suzet container image is pretty, pretty small, there might be that if you try to install one, it will try to install three gigabytes of packages. Theo loves, I know he loves. So what happened is, yeah, internet connection problem. So just some things here. We see that some packages here could not be able to install. So if I do like grep for that, we can see those packages. Python Django, network manager, novel VPN, this looks like a very old one. Copana web hub, and stuff like that, and what is also interesting is from what are installed in that case, I have like 1,500 wrong permissions here, something like this. So if I search for that, I get that, like setting, blah, blah, blah, trusting wrong permissions. I'm not sure if this is a bug or not, but this is something that I get. This is not a bug. Then why it says wrong permissions? Why there is the word wrong? I mean, yes? Because the permissions file is a configuration. All right. So these are not bugs that's good to know, but at least what I'm trying to say here is that the parser checks those. In case you need them, to check them, the parser checks those. And all those things are now in the database. Hopefully, all things are now here. Let me refresh that. Okay. So, yes, everything is here. All the packages. Yeah. And you can go to visualize, create a visualization, create a pie chart, and here you can have your metrics like these are all the packages that we have, these are the packages that fail with that error, these are the packages that fail with the other error, and so on and so forth. So you can say, okay, classify them and give them to your user to fix them. So that's the project so far. It was really fun for me and I received good feedback, so that's why I'm here. Maybe it's interesting for you. You have some ideas to tell me how this can be improved. Any questions? Yeah, let me pass you the microphone over. This can open like this. Okay. Yes, I didn't dare to press the red button. So, whenever you submit a package to open to the factory, there's an install check, but of course, only for that single package and of course, it could be that it breaks another package that's already there. So maybe these are the errors that you can find with your approach. But on the other end, when we have like a daily tumbleweed snapshot and you say that this takes like, you know, about a day, we cannot run this test for every tumbleweed snapshot. Do you have a suggestion when to do this? I don't know. Maybe we can write it for Leap. I have no idea if it makes sense. Actually, we spoke about that already in the past. You could only test the packages which changed and you could talk to a WebMQ which is fed by the build service, which packages are changed and you only need to test those. One thing that your regular test would find is packages which are fair to build and are now broken because their dependency has changed. So you would still need regular tests like once a week or once every two weeks. But if you only want to test in the tumbleweed time frames and test the packages where you get successful build events or publish events. All right. Okay. We have one minute. I would just like to say that this thing works with system dn spawn container also. So that was another experiment. 
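The "really hacky parser" boils down to grepping the collected zypper logs for known strings; something along these lines, with the patterns being illustrative rather than the project's exact ones:

    # packages whose installation ran into a dependency problem
    grep -l "Problem:" logs/*.log | wc -l
    # packages zypper could not find at all
    grep -l "No provider of" logs/*.log
    # aggregate identical messages (e.g. the permissions warnings) across all logs
    grep -h "wrong permissions" logs/*.log | sort | uniq -c | sort -rn | head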
You can find details how to do it. Any other questions in the room? Yes. No. No. Okay. So no questions. Thank you very much for being here.
Testing package installation using containers In Tumbleweed we have roughly ~25,000 packages for the 64-bit architecture. Do you know how many of those are actually installable? Of those that are not, do you know the reason why? Do you know how many of them would become installable if boo#123456 got fixed? And of those that can be installed, do you know whether there are any glitches in the post-installation scripts? - Sure, we have openQA, but it only tests the packages on the DVD, not the entire ecosystem. - Sure, we have OBS, so everything that gets built should also be installable. No? - Sure, we have libsolv techniques that can answer this. But have you tested whether the results reflect the real world? There is only one way to verify what is really happening: one system per package. Yes, that is extreme, you would probably need 25,000 virtual machines. But ... hold on ... what about using containers? Well, I have an idea! I have developed a project for fun, and I would be delighted to share it with you. Egkatastasis (you can call it *egg*) is an open source system for testing openSUSE container images, providing basic mechanisms for installation testing, log analysis, and metrics visualization of every package contained in the official repositories. Egkatastasis tests package installation at scale using Docker and systemd-nspawn, combined with best-of-breed ideas and practices from the community, using Filebeat and the Elastic Stack.
10.5446/54546 (DOI)
All right, I guess go ahead and get started. So the talk's obviously repository priorities for the real world user. So I guess my motivation for making this talk is seeing people in IRC and some of the other support channels have issues that I believe could be resolved by using priorities on repositories. So first let's go over the goals we're hoping to accomplish with this talk. So I want to cover why you might actually add extra repositories beyond the default ones. Some of the pitfalls you can run into when using different types of repositories without priorities and how you can avoid those issues by using priorities. So first of all, let's take a look at why you might add repositories. So the first one, which I think is probably the most common one, is you want packages compiled with different or additional features, something like Pacman or something like that. Another one being you want packages that are not available already in one of the main product repositories. So you might add a develop project or someone's own project. Otherwise, you might add four pre-release packages either again in develop project or somewhere else. Or you may be maintaining packages and you want to easily install them on your machine after you build them on OBS to try them out. So there's some reasons why you might add repositories. So let's take a look at a fresh install and we'll go through the first reason and kind of cover what I imagine a lot of people already do. So let's go ahead and add Pacman repository on default install. So hopefully you're familiar with adding a repository with Zipper. And then another next step being to dump from that package or the repository, which would switch all your packages to Pacman. And then an example here of trying to install Blender, which I believe shouldn't be in the default install. So then the question arises, where does Blender come from? Because it's in both Pacman and it's in the main product repository. So does it come from Tumbley or Pacman? And the answer is not necessarily obvious. So basically here's an example where we have, these are real versions I pulled from those two repositories. You can see a big long version string. So in this case, it would come from Pacman because the build number is larger. So basically it's always looking for overall larger number. So let's say that we had a new release of Blender that hits Tumbleweed. It hasn't yet propagated to Pacman yet. So if you were to be installing and you happen to hit this scenario, you would have the 3.0 version in Tumbleweed and not in Pacman. So in this case, it would come from Tumbleweed. So there's no clear rhyme or reason. So it's kind of, do you feel lucky? Where is it going to come from? You can obviously check this yourself manually, but generally you usually add repositories with a reason in mind. So let's talk about when we're dumping with allow vendor change, which is when a lot of this comes from. So this used to be the default behavior that would come out of the box, but it would mean that if you added something like Pacman, depending on when you decided to update your machine and the state of Pacman and the product repository, the packages might flip flop because of those build division numbers or the versions of the packages themselves. And that can cause a lot of problems. You could actually have weird splits of packages from both repositories. Going forward, that's been disabled. 
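Spelled out as commands ("dup" being zypper's dist-upgrade), that common starting point looks like this; the Packman mirror URL is just an example, use whichever mirror you normally would:

    zypper addrepo --refresh http://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Tumbleweed/ packman
    zypper dist-upgrade --from packman   # switch the installed packages over to Packman
    zypper install blender               # now: which repository does this come from?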
So the default now is to not allow vendor change by default, which prevents the flip flop, but you don't necessarily resolve the question of where does the package come from if I don't manually pick it. This is obviously further aggravated if you add more things besides just something like Pacman if you add develop projects or your own repositories. A lot of times, those repositories will have different versions of the same packages, which means that package will be available in multiple sources and you'll have to make a decision as to where it comes from. So the question is, can we communicate why are we adding a repository to zipper in such a way that it would make the right choices to where to take the package from? And the answer is yes. You can use repository priorities. So priorities allow us to specify an order of precedence for the repositories, which means which one to basically use over the other ones when they both have the same package. So by doing this, we eliminate the flip flop because we'll basically always use whichever repository has the higher precedence. Which also gives us consistent package sourcing because if we know that a package is in multiple repositories, it always comes from whichever one we have the higher priority. So again, it doesn't switch and it picks from the same place. It's also nicely documents on your own machine, kind of what your reasoning for is for adding repositories because a lot of times they can be there for a long time and you may forget. So helps with that. And as we'll cover more, it nicely allows kind of a hands-free switching to the product repository when packages become stable. So I'll go through that example some more. So how to use zipper repository priorities. You can either set them when you add a repository, the example there, or you can modify them after the fact. So in case you're not familiar, the default priority is 99. Lower is more important. Higher is less important. So basically one would always win over a 99. So it's an ascending order of importance. So here's an example of what your default setup might look like on Tumblyd where you basically have the three main repositories all enabled with default priority of 99. So now let's go ahead and we'll revisit our Pac-Man scenario where we add Pac-Man. But this time we'll add it with a priority of 90. So that being more important than the default packages, which is typically why you would add Pac-Man since you want it to replace packages with those compiled with additional features. So now we don't actually have to specify to dup from Pac-Man. We can just say dup in general because it will go ahead and pick any packages that come from both from Pac-Man because we told it it's a higher priority. So here again is the example of installing a package that wouldn't exist in the first place, so Blender. So then we have the question again of where does it come from? But this time it's easy to answer. It comes from Pac-Man because we added that with a higher priority. So again we'll take a look at the example where I gave earlier in Flux where there was an update in Tumblyd that hasn't yet propagated to Pac-Man. So in this case the version number is clearly larger for Tumblyd than it is in Pac-Man. But again it will come from Pac-Man regardless, which is actually what we want. So there's no flip-flop and clearly go through it. So next I'll cover a case study that I had personally which was using KDE Connect. So it was not yet in the official repositories when I started using it. 
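The same Packman setup, but with a priority of 90 so it outranks the default 99:

    zypper addrepo --refresh --priority 90 http://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Tumbleweed/ packman
    # or change it after the fact
    zypper modifyrepo --priority 90 packman
    # no --from needed any more: anything available in both places comes from Packman
    zypper dist-upgrade
    zypper install blender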
So I actually added, I had already the KDE Extra repository and I added the KDE Extra unstable. So you'll see I added them with priorities of 105 and 106, so those being less important than the default priorities, which means I basically only want to use them for packages that aren't in the main product repositories. So I would go ahead and install KDE Connect in that case. And then initially it came from the unstable repository because the KDE maintainers only had it there since it had no official releases. Some months later it had a pre-release candidate, so the package was moved into the KDE Extra repository and nicely automatically for me when I went to dump my machine there was a change in repositories to that one. I didn't have to do anything, but I basically got the more stable one. Basically you might want to continue using the nightly builds, but normally that's not what I want. And then a few months later had official stable releases, so when I dumped then it automatically switched to the standard repository. So to review, basically we can automatically switch to using the main product repos without having to do anything. So again, this took like five months or something to occur. Nothing I had to manage. I just noticed it when I went to dump, which was kind of nice. Everything's dealt with for you. It also means that we can add lots of extra repositories. So in fact you could just about add all the repositories on OBS with lower priorities and in theory they wouldn't completely hose your machine because you wouldn't use anything from them if it didn't come, if it was already in the main product repos. So that's kind of nice. So let's bring it all together. How do we actually make use of all this? So the key here is that we want to always dump with a lot of interchange and we want to have priorities set on all our repos. So if you don't have priorities set then obviously dumping with a lot of interchange can have undesirable effects. So these are the ways that you can dump with that either whenever you run the command or you can change the config. And this basically gives us everything that we want. So we automatically utilize new packages when they're added to Pac-Man. So if there's some package like Blender that comes out with new feature and it can only be compiled in Pac-Man then we automatically switch to that. And when they're dropped everything babes basically the way you want and extra repos, all the ones with lower priority are basically only utilized when you have to. So let's review the reasons why we might add repositories again here and basically assign them priorities. These are priorities I use. The key here is more or less important than the defaults. I generally leave space. So I basically increment by five. That way if I have interesting cases that need to fit in the middle I can do that. So for example, compiled with additional or different features that would be Pac-Man. So I would add it with priority of 90. So it would be more important. Our repository that contains extra packages 105 so that was like the KDX extra ones. And then again, pre-release or things you want to test yourself. They're higher priority than the defaults but I don't generally want to override Pac-Man in this case but you might. So for example, this is what my desktop looked like before this presentation. Obviously I have a lot of other repositories but basically just there. So if you run list repositories with the P option you get them sorted by priority which is nice. 
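Putting the scheme together, the KDE Connect case and the "always dup with allow-vendor-change" habit translate to roughly the following; the repository URLs just follow the usual download.opensuse.org pattern and the package name may differ, so treat them as placeholders:

    # extra repos, *less* important than the defaults: only used for packages the product does not carry
    zypper addrepo --refresh --priority 105 https://download.opensuse.org/repositories/KDE:/Extra/openSUSE_Tumbleweed/ kde-extra
    zypper addrepo --refresh --priority 106 https://download.opensuse.org/repositories/KDE:/Unstable:/Extra/openSUSE_Tumbleweed/ kde-unstable-extra
    zypper install kdeconnect-kde

    # allow vendor changes per invocation ...
    zypper dist-upgrade --allow-vendor-change
    # ... or permanently, via /etc/zypp/zypp.conf:
    #     solver.dupAllowVendorChange = true

    # list repositories sorted by priority
    zypper lr -P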
That goes back to documenting why you might have added these. So for example, the network and open age repositories I added because I wanted a package out of them that wasn't available in the main product, which is why they are lower priority. So again, it's very easy to tell when you look at that list. Of course the standard caveats apply. If you're installing things from extra additional repos they may or may not work. Obviously things in devel projects are in flux, so packages may be broken, and zypper priorities don't protect you from things that are broken, so I always look at the dup output and make sure it's a sane proposal. And that's the basic idea. From personal use, before the default for allow vendor change was changed in Tumbleweed, this resolved issues with Packman and other things like that. I used to have issues where you would have to resolve package problems when you have lots of repositories like that, and I haven't had issues since doing this for several years. So any questions or comments? All right. Then I guess that's it. So thank you. Thanks a bunch.
Use additional repositories with confidence The topic of additional repositories comes up on a regular basis. The official position is to submit everything to Factory to avoid the issue, but for a number of reasons this cannot always be the case. As such, users living in the real world have to navigate the unsupported landscape. By far the best approach is to use the repository priorities available through libzypp, but unfortunately this is not well known. This quick talk will cover the basic usage of priorities, strategies for real-world usage, and examples of how effective the workflow can be. Additionally, some pitfalls will be covered.
10.5446/54548 (DOI)
Alright, welcome to my talk about let's encrypt in general and specifically on OpenSusum and why you should, in case you are, no longer be afraid to use it. So before we dive into the topic, let's briefly introduce myself. My name is Daniel Morkentine. I live in Berlin and I work as a slash core developer at SUSE. I am a bit interested in practical privacy and security, which is why I'm giving this talk and why I also introduced let's encrypt into OpenSusum. And yeah, well, other than that, I'm part of the team that helps making recording of conferences like these. If you saw the description, I said the title had the road runner in it and it's not a project as more as something that gives a theme to the whole thing and you will later see why. Quickly I introduced the road runner because you know how it's always, does everyone know the cartoon character in here? Who knows the road runner from Coyote and the road runner? A couple of you, okay. So he's always the lucky guy who's chased by Coyote and he'll survive. He's going to be fine in the end. And we'll see how this connects, but essentially I've taken this as sort of a project name for having a real easy, quick and fast to get running kind of solution for TLS certificates using let's encrypt and OpenSusum. And I really want setting up TLS to be as fast as that road runner is running away from Coyote. So what should you expect from this talk? I wanted to show and I wanted to know how to TLS secure or which services to TLS secure and essentially spoiler alert here. Like you can essentially now TLS secure any service. And the challenges that TLS or deploying TLS brings and how let's encrypt helps you solving them. And of course, let's encrypt is, you know, that thing that everyone tells you to use now, but how we're going to go. We're going to take a look into that as well. And finally, I'll explain how I think we can still improve in OpenSusum in making it even easier for you and other users to actually use it. And I hope you can make an informed decision on how to go about TLS deployment. And that is the obvious slide that briefly sums up TLS used to be called SSL long, long time ago when it was a proprietary standard by drawn up by Netscape. In the meantime, it's a 9TF specified standard. And if you check below there, Wittich-Leff has given a very good talk just I think two hours ago that is already online. And you should really check out the latest TLS versions, but for now, the version that we support is TLS 1.2, which is an approved ITF standard. And the purpose is to secure existing protocols in transit. So between two ports, you have a TCP connection and instead of directly sending HTTP, you first establish an SSL connection and then use TCP, SMTP, IMAP, whatever, over that now secure line. And essentially, what makes this work is the chain of trust model. So that means that there is certain entities that in the end you have to trust to make this work. These entities are called certificate authorities and they have something called root certificates or root anchors and we're going to talk about them later as well. So fair question, why should you even bother? First of all, the panic has been all around the world and particularly in Europe. The new data protection laws that essentially say you have to make a realistic effort to secure your data in transit. 
Then there are certain things, especially if you're dealing with credit card companies that will make you adhere to certain policies and they will send auditors to make sure you adhere to them. And again, this may be part of that. Sneaky spies, it's known enough said, but really more realistically it's these guys. Like sneaky script kiddies spoofing a Wi-Fi, you think, hey, free internet and there go your data. And this is really the most likely scenario, even though we all think, well, we need to protect ourselves from the NSA, whatever, really in the end, it's these kids. And by the way, whoever comes up with these stock photos, really, no. So there is still not an answer to why do we need a free and automatic certificate authority. After all, there is an existing system, right? And to motivate that first question, why does your OS or browser trust your trust certificate authority? Does anyone know? So how come there's this list of certificate authorities in your browser? They declared themselves secure? One answer I would say the browser or operating center vendor vetted them. And that is sort of true, but not really. The better truth is that, that they are really paying a lot of money to a company. And that's the point they are paying a company to compile them an audit report. And with that audit report and certain other things, they then go to the browser vendors and say, see, we can be trusted. Maybe good idea, maybe not. Because there were quite some CAs that were breached and the most prominent breach is Digi Notar back in 2011. And then we had the VOSans.com disaster who actually breached the policies set out by themselves along with browser vendors and other CAs, the so-called CAB forum. And finally, just recently, Symantec with known and really common brands like GeoTrust authority, very sign just last year. And over that, they lost all browser trust and essentially Symantec is now out of the certificate business for better or worse, probably for better. So yeah, a lot of people need to go somewhere else and dump your money. The question is, do you really need to dump your money somewhere else or maybe can you just use Let's Encrypt? And the main argument of the certificate authority, the classic certificate authority, is like, oh no, you know, what we are selling you is identity. Because you can really prove that you are awesome incorporated and not just some weird guy who is trying to steal credit card information or whatever. And the interesting answer is, you may know that if you have these, if you go to some online shops, you have next to the URL bar, you know, trusted web shop incorporated in green letters. So that's supposed to make you feel really safe that you are really talking, really talking to that entity. And officially they say, yes, we will check your business, that it really exists, that it is registered, et cetera, et cetera, et cetera. Unfortunately, it's quite easy to spoof because you can just set up the same company in a different country that has proven to work. Or you can just, you know, register a company and again, registering a company is like $1 or something in the worst case. That has ambiguous names like secure.ltd and everyone is like, oh, cool, there is the word secure in green letters next to the URL bar, so it's really secure now. But that has no meaning. 
If you do a poll and ask people if they know why exactly there is sometimes a company name in green letters, what it means and what it, if it's really different trust level to them, you will see that they are utterly confused and they have no idea. Actually there was a paper, I think, two or three years ago that did exactly that and tried to poll public opinion about what that actually meant and the result was quite disastrous. So these so-called extended validation or organization validated certificates, their value is doubtful at best. So that's encrypt will not issue any extended validation certificate, they will just issue the main validated certificate, which means they just validate that you control that domain, nothing else. And finally, if you ever bought a traditional certificate, then you would get, if you had a good reseller, then they would give you set up guidance for the most popular web servers or email servers. But it was still a pain making good choices because also these set up guides were sometimes outdated. So making good choices in terms of what sort of crypto you deploy along with these certificates was really a problem and every one to four years you would get something like that because someone else, someone who was in charge in your company of renewing the certificates for God at least one host. So and the solution to that was, okay, next time we are not purchasing the one or two year certificates, we are purchasing the insanely expensive four years experience, which saves you that is, which basically means you will get the same embarrassment, only you have four years time to prepare for the next time and you won't do it anyway. So after I hope I made it clear that you don't want traditional certificates anymore, let's now see what Lits and Crypt can do for you and what it actually is. So we've established we needed a new kind of certificate authority, one that is fully automated. And once it's fully automated, the cool thing is that we can actually have reduced certificate lifetimes, it's suddenly completely okay if a certificate is only valid for a couple of weeks because it's all automated, right? So what happened is that the Electric Frontier Foundation, Mozilla, Akamai, Cisco and others went to found the Internet Security Research Group, it's currently funded through donations from individuals and companies and this group runs their own root certificate authority. But of course the problem is, excuse me, of course the problem is how do you get this everywhere, right? I mean establishing a root CA everywhere in all browsers, in all operating systems, in Java, in I don't know what, it just takes some time and it took, I think, Marcus correct me if I'm wrong, it took us until six, like four, four, two or four months we have those certificates in now, yeah. So it took us that time and it will take others even longer. So if there are no root CA's from Let's Encrypt, how is that even, how is it even a real alternative? And it's a real alternative because another CA called IDENT trust that has been in browsers and operating systems forever has agreed to cross sign with Let's Encrypt, that means that until the Let's Encrypt certificate authority is everywhere, you can essentially just use that cross sign and that means that all certificates are automatically already valid. And at this point, again, most popular browsers have already picked it up. And the way that Let's Encrypt works, I know that there is this legal entity, but I also said it was automated. 
So what does automated mean? Automated means they established a protocol called ACME, the Automatic Certificate Management Environment. There were two versions of it: the first attempt is now deprecated; the current version is ACME v2, which has been submitted to the Internet Engineering Task Force for standardization as an RFC, and which supports wildcard certificates for the first time. There is a server reference implementation, called Boulder, running at the Internet Security Research Group. And you see the pattern: ACME is the company that would send all kinds of gimmicks to Wile E. Coyote to chase the Road Runner, and boulders were what he usually used to try to squash the Road Runner. That's the general theme, we are inside this cartoon, so all the software around this refers to characters or props from Coyote and the Road Runner. Then there are a couple of client implementations. The most popular is Certbot, which is maintained by the Electronic Frontier Foundation. It's written in Python; unfortunately it's fairly complex, both in packaging and in maintenance. So there are alternative clients, most prominently dehydrated in Bash, acme.sh, and acme4j in Java, and there are many more tools available that are usually special purpose. So how does it actually work? With ACME you prove possession or ownership of a specific domain or host name, and there are essentially two strategies these days. Either HTTP-01, a mechanism where you claim "hey, I own this host" and the ACME server on the other side says: okay, prove it, here's your challenge; if I find that challenge in a specific sub-directory of your web server on port 80, then I believe you are in control of that domain name. The other alternative is DNS-01, where you publish that proof, the token, as a TXT record in the DNS zone. Both are possible, but if you want wildcard certificates you must choose DNS-01, and we'll see examples of that later. So how did we approach this in openSUSE Leap 15? We adopted dehydrated, the Bash-based implementation. The good thing is that it has very few dependencies, which can be relied upon because they're just always there. It's flexible and extendable: you can write hooks in shell and from shell call anything else, and we'll see examples of that a bit later. There is also an extensive set of examples and integrations, both in the documentation we ship and on the dehydrated website. In addition, we've tried to come up with a really good package. That means we added sub-packages that add integration for Apache, lighttpd and nginx; we provide cron scripts for older distributions and a systemd timer for newer ones; and we provide post-run hooks that are executed as the root user. Usually everything runs as a separate, unprivileged user, but if there are things you need to do, like restarting a service, you don't need any fancy sudo elevation, you can just do it in a post hook. And there is a config directory scheme where you can override the default config file without ever really touching it. Okay, so let's see how you can actually do this.
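In a nutshell, the single-host flow described next boils down to a handful of commands. Treat this as a sketch: the package, path and timer names follow the openSUSE packaging discussed here and should be verified locally.

    # install the client plus the Apache integration sub-package
    zypper install dehydrated dehydrated-apache2
    # set a contact address for expiry warnings
    # (in /etc/dehydrated/config or the override directory mentioned above)
    echo 'CONTACT_EMAIL="hostmaster@example.org"' >> /etc/dehydrated/config
    # claim the host name(s) you want certificates for
    echo "www.example.org" >> /etc/dehydrated/domains.txt
    # create the ACME account and fetch the first certificate
    dehydrated --register --accept-terms
    dehydrated --cron
    # keep renewing automatically
    systemctl enable --now dehydrated.timer

The timer (or the cron job on older distributions) then renews well before the short certificate lifetime runs out.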
So, assuming a normal Apache setup, you just install apache2, dehydrated and dehydrated-apache2. You register a contact, because you need to create what is called an account. What it really is, is a key that identifies you, and when registering that key you can optionally provide an email address. You don't have to, but the benefit is that if your automation ever goes wrong and Let's Encrypt doesn't see a renewal for a certificate that is about to expire, they will warn you in time and you can check your infrastructure before things go bad. So you should always specify a contact email. Then you claim your host name, and you run dehydrated for the first time. You can just call it with the cron parameter; dehydrated checks whether you're running as root and, if so, demotes itself to the dehydrated user. Once you have successfully retrieved your certificate, you enable the dehydrated systemd timer, or on older systems rely on the cron job, which I think even runs by default. Now the only task that is still up to you is to go to this page here; sorry, I can show it right here. This page gives you guidance on how to configure your virtual host in Apache in a compatible way, because few of you are probably specialists in that field (if you are, good for you, but most are not). You can tell it: I want to be really secure, give me the modern profile; or the intermediate one, compatible but without support for slightly unsafe mechanisms; or, if you're running in a legacy environment and have to be compatible, choose the old profile. Then you pick your web server and you're done. Okay, let's switch back. So what does this dehydrated-apache2 package do? It creates a file in conf.d that registers /.well-known, which is, by the way, an RFC-specified place for such things and where the Let's Encrypt certificate authority will expect to find your challenge, and maps it to /var/lib/acme-challenge, just to have a local point of storage. Now the more complex scenario: if you're a typical home server admin, the above is probably enough. But if you have a really complex environment with multiple servers, or think of cloud services and so on, then you will probably need a couple of certificates for a couple of host names. So what do you do? This is the primary case for the DNS method. You create a host that is dedicated to acquiring certificates, you add all the certificates that you want to obtain, and you run this; I'm going to show the configuration for this slide later. Then you distribute the certificates to all the hosts that need them, with SSH, with Salt, with whatever. So you are basically using an intermediary host on your local end to obtain and distribute all the certificates. And this is how it looks: you need access to your DNS server. Maybe you can use nsupdate; if you can't, there are also scripts available for the most popular DNS providers, because they usually have an API. What this does is essentially publish a challenge, and after the CA has successfully validated that challenge, it removes it from DNS again.
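That hook is a plain shell script which dehydrated calls with an operation name and the challenge data. A minimal sketch using nsupdate could look like the following; the name server, TSIG key path and TTL are illustrative assumptions, and you enable it with CHALLENGETYPE="dns-01" and HOOK=/etc/dehydrated/hook.sh in the dehydrated config.

    #!/bin/bash
    # /etc/dehydrated/hook.sh -- dehydrated passes the stage as $1,
    # the domain as $2 and the challenge token value as $4
    KEY=/etc/dehydrated/tsig.key     # assumed TSIG key for dynamic DNS updates
    NS=ns1.example.org               # assumed authoritative name server
    case "$1" in
      deploy_challenge)
        printf 'server %s\nupdate add _acme-challenge.%s. 60 IN TXT "%s"\nsend\n' \
            "$NS" "$2" "$4" | nsupdate -k "$KEY"
        ;;
      clean_challenge)
        printf 'server %s\nupdate delete _acme-challenge.%s. TXT\nsend\n' \
            "$NS" "$2" | nsupdate -k "$KEY"
        ;;
      *) : ;;                        # ignore the other stages in this sketch
    esac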
And yes, that means this dedicated host needs access to your DNS, and this is exactly why you should compartmentalize this functionality on a single host and not have every host do it: you probably don't want every host to have access to your DNS infrastructure. The cool thing about this is your intranet. Your LAN is probably recognizable as an intranet precisely because its non-public servers are the ones that give you certificate errors all over the place, either because you would need to deploy a local CA, or because you don't deploy one at all and run with a default certificate, so it throws an error anyway. Now you don't have to put up with that anymore. The DNS zone itself needs to be visible to the outside, but you can use internal addresses, whatever you like, it doesn't really matter. You can automatically obtain certificates for devices in your local LAN and distribute them, even to things like lights-out management boards. Done right, this makes your intranet as secure as your public website in terms of transport security and validation safety. The only warning: be aware that every certificate, whether from Let's Encrypt or any commercial certificate authority, now has to be publicly logged. This process is called Certificate Transparency, and it means you are leaking your host names to the public. If you're not comfortable with that, if you think a bit of obscurity may actually be a good idea here because you don't want to give potential attackers too many hints about your internal infrastructure (there is a reason you probably don't allow everyone a DNS zone transfer, and this is a bit like a zone transfer), then you may consider requesting only wildcard certificates and distributing those. Again, that's up to you. And does it work? Yes, it's proven to work, because we rolled out pretty much exactly this scenario, a bit more complex even, on the openSUSE infrastructure. I'd like to give a big hand to the openSUSE Heroes team, can we have a big hand, please? They made sure this actually works. I could only test small things, but they have an entire infrastructure to throw these things at, and Darix in particular beta tested the packages and gave me a lot of advice. So I can confidently say we have at least one non-trivial infrastructure where this is known to work. Thank you. Now, the future. We've improved things, but even for just one server it's still fairly complicated, right? Ideally you just want to receive a certificate and not care. There were two things I had in mind. The first was a YaST module that I started together with a colleague during last year's Hack Week, but people convinced me that it may not be of much help, because ultimately most people would deploy this via Salt and similar tools anyway. In the end we decided to put it aside and concentrate on the command-line user experience first; if you have a different opinion, come talk to me later, it still exists. The second is integrated renewal. That is what you really want.
There is mod_md in Apache for the trivial cases in particular, where you just create a virtual host, say SSLEngine on, and bam: you don't have to care about the rest, mod_md takes care of it. You can tweak a couple of things, like how modern your set of ciphers should be, but in general there is nothing more to it than SSLEngine on, and it deals with the rest. The problem is that it has been there since, I think, Apache 2.4.30, which means it is in the code base that is also in Leap 15, but it's not enabled. I haven't spoken to the maintainer yet, but I suppose that's because the upstream documentation says, in big letters, that it is experimental and ACMEv1 only. So I think it's the future, but it's not there yet. There are similar, limited self-renewal features available for HAProxy and nginx. And if you prefer Go, there are Caddy and Traefik, both web servers written in Go that have ACME-based certificate issuance built in as an integral part. That pretty much brings me to the end. I hope you've seen that you can reliably and automatically obtain and distribute SSL certificates. It's still not easy enough, I'm still not happy, the Road Runner can still run faster, and at this point I really depend on and would appreciate your feedback and your contributions. Here is the repository in OBS for dehydrated. Please feed the Road Runner. I'd be really glad if we could have a chat at the end, and if you have any questions, here's your time. Thank you. Thank you.
While the need for encrypted web sites has been sufficiently motivated by countless revelations on state sponsored surveillance or malevolent ISPs, acquiring a LetsEncrypt certificate used to be a tiresome business, and usually certificates broke anyway. openSUSE Leap 15 will be the first long term distribution to provide automated certificate requests and renewals thanks to dehydrated, which is also available for older distributions via OBS. This talk will show how to quickly acquire certificates for a single host and ensure that they will be automatically renewed and how to orchestrate certificate renewal for a whole fleet of servers and services via DNS. Finally, we will also look into further and future simplification for single services, such as Caddy or Apache's mod_md.
10.5446/54665 (DOI)
Hello, my name is Anton Smorodskiy, and today I will give a short introduction to a framework for monitoring and cleaning up cloud service providers: Public Cloud Watcher. Let's start. First, a few words about myself. I have been working in IT since 2005, using Linux as my main tool for work and fun since 2007. Before joining SUSE my main areas of interest were Java and automated testing, mainly around online retail stores. After joining SUSE the focus shifted to Perl, Python and testing of SLE and openSUSE; one of those areas is testing SLE in the public cloud. My current favourite distro is openSUSE Leap, so I am installing it everywhere I can. We will start by stating the problem Public Cloud Watcher is trying to solve, then switch to the tool itself and go through its main features, and we will also talk about the internals of Public Cloud Watcher. Next we will speak about HashiCorp Vault, what this tool is generally for and about our very specific use of it, and we will discuss how we currently maintain the running instance of Public Cloud Watcher: what setup we have done to keep it running without worrying about any potential environment breakage. The last topic will be future plans for this project. In my daily work, one of the main things I do is test how SLE behaves as a virtual machine in different cloud service providers: Azure, AWS and GCE at the moment. All testing related to public cloud providers happens in openQA in an automated way. I assume the majority of people attending openSUSE conferences know at least something about openQA; for those who don't, or who want to know more, I recommend visiting open.qa or finding one of the nice talks about openQA from previous conferences, and I am sure there was more than one. But let's get back to our main topic. openQA uploads an SLE image into the dedicated cloud, then creates a VM from this image, runs some tests against it, and afterwards tries to clean up after itself by deleting all created entities. Keep in mind that I said "creating a VM", but it's actually much more than that: besides the VM, every public cloud provider creates a lot of additional entities, such as subnets, resource groups, disks and disk images. After a successful test execution we have logic which cleans up all created entities, but of course there are plenty of ways things can go wrong, and no matter how much we improve our code to always clean up, there is still room for unexpected behaviour from the openQA side or from the provider side which prevents the cleanup from finishing. All of this is multiplied by the diversity of public cloud providers. As I said, VM creation usually means creation of additional resources, and every cloud service provider has its own understanding of which resources get created and how they behave when you delete the VM: some are cleaned up automatically, others are not, and the behaviour differs in many cases. So whenever something goes wrong in openQA, you don't simply check that the VM was deleted; you also have to check the other entities and keep in mind all the quirks of the particular provider. We currently start around 300 VMs daily in openQA across all three providers together, and this number is expected to grow in the near future. So obviously there is no chance of keeping all of this under manual checking.
So we need some logic which double-checks that our tests don't burn extra money. It should also live outside openQA, so that the same bug which invalidates the cleanup at the test level cannot break our last-chance cleanup logic as well. And it would be nice to have everything in one place to ease maintenance. Another problem, not directly related to what I just described, comes from the fact that creating a VM requires credentials, which you need to pass to the test in a secure way while making sure that an unauthorized person cannot use them. To address the first problem we created Public Cloud Watcher. It's published on GitHub, so feel free to use it, learn from it or contribute. It's written in Python using the Django framework; Django was chosen because it's a cheap way to get a web UI. Public Cloud Watcher monitors, cleans up and notifies about leftovers in the supported providers. Currently we support three cloud service providers: Microsoft Azure, Amazon AWS and Google Compute Engine. For each provider it uses the provider's native Python API bindings to communicate with the cloud. Public Cloud Watcher is currently used by several teams inside our company. Each team has its own credentials to access the clouds, its own image naming conventions, and its own rules for which images need to be deleted and when. This is why Public Cloud Watcher has the notion of namespaces, which lets us define these differences per team. Also, in some cases public cloud accounts are used not only by our openQA automation: other people and other automated workflows may create resources in the same accounts, and obviously we must not touch those. So we need something smarter than "delete everything older than N days" in order not to interfere with work happening outside openQA. For this we decided to use a feature available in all three providers: tags on resources. When openQA creates a VM, it sets two tags on it: openqa_created_by, which tells Public Cloud Watcher that this VM should be monitored, and openqa_ttl, which stores the amount of time after which Public Cloud Watcher may delete the VM if openQA failed to do so after the test run. The internals of Public Cloud Watcher can be logically divided into several groups. One group is the classes responsible for the actual interaction with the providers: a dedicated class per provider which holds all the provider-specific logic. Each class knows how to authenticate with its provider, how to query for resources, and which resources actually need to be queried. Another group is responsible for the actual VM cleanup. It has a process which periodically loops over all providers in all defined namespaces and collects VMs according to the tags I mentioned. These VMs are serialized into a local SQLite database; then it tries to delete VMs that have lived longer than the TTL defined for them, and at the end it sends an email notification about VMs that could not be deleted, so human involvement may be needed. There is also a group responsible for cleaning up everything else. It holds the knowledge about the specialties of every provider and which additional entities are created together with VMs, and cleans them up separately, because in many cases the provider does not delete these entities when the VM is gone. All these cleanup and notification flows can be turned on and off globally or per namespace.
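Returning to the tag convention for a moment, marking an instance for Public Cloud Watcher is just a matter of attaching those two tags. The snippet below uses the AWS CLI purely as an illustration; the tag keys follow the names quoted in this talk, and the exact spelling and values should be checked against the openQA test code before relying on them.

    # tag a running EC2 instance so PCW monitors it and may delete it
    # once the TTL (in seconds) has expired
    aws ec2 create-tags --resources i-0123456789abcdef0 \
        --tags Key=openqa_created_by,Value=openqa.example.org \
               Key=openqa_ttl,Value=7200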
This per-namespace configuration is stored in the pcw.ini file, which Public Cloud Watcher reads on startup. The local SQLite database with the cached list of VMs can also be accessed via a web interface, which lets you browse through the list of VMs, do some basic searching, and manually trigger deletion of VMs that were not yet cleaned up automatically. Now let's discuss the second problem I raised: credentials. After some discussion around it we realized that providing a truly secure flow for handling credentials within openQA would require building a pretty complex mechanism that could still be broken easily. So we chose another path: we decided not to hide the credentials at all, but to give them a short TTL and a very limited permission set. To achieve this we picked a project from the company that created Terraform, which we also use a lot internally in our testing approach. The project is called Vault, and its project page describes it as follows: "Secure, store and tightly control access to tokens, passwords, certificates, encryption keys for protecting secrets and other sensitive data using UI, CLI, or HTTP API." Vault also has a notion of namespaces, so it fits our model perfectly. Another useful feature is the ability to request temporary credentials from a provider with a specific set of permissions: Vault keeps track of these credentials and can request their deletion from the provider. Vault is used by the openQA tests, so they can create and delete VMs, and by Public Cloud Watcher, so it can query our accounts and delete leftovers if there are any. We use a very basic Vault setup, one that the documentation actually describes as being for testing purposes only and not for production environments. We don't use any of the persistent Vault storage backends, which would give plenty of options for storing credentials durably; instead we use the in-memory storage, which means 100% data loss if Vault is rebooted or shut down. That is totally fine for us, because we are generating temporary credentials anyway. Another not-recommended way in which we use Vault is that we allow unencrypted connections to it, but I will say more about that when I describe our production instance setup. So, last but not least, let's talk about our production instance, where Public Cloud Watcher and Vault are running. In the beginning it was just a random VM running somewhere, but over time, with more people using it and more test runs relying on it, we realized we needed more stable ground. If Vault is down, no test can finish, because the tests need Vault to get the temporary credentials that allow them to create and delete entities in the cloud. If Public Cloud Watcher is down, tests can still run, but there is a risk of getting big bills from Microsoft, Amazon and Google. So at some point we decided to move to a state where we can recreate the whole setup on a clean machine within 10 minutes. There are plenty of options for keeping infrastructure as code; each has its pros and cons, and sometimes it's simply a matter of taste. After going through several options we ended up choosing containers as the building material for our setup.
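To make the Vault side concrete before looking at that production setup: with one of Vault's cloud secrets engines, requesting throwaway credentials boils down to something like the following. This is a hedged sketch using the Azure secrets engine; the role name, TTL and scope here are illustrative assumptions, not our exact configuration, and the engine first has to be configured with the account's root credentials.

    # one-time setup: enable the engine and define a narrowly scoped role
    vault secrets enable azure
    vault write azure/roles/openqa-test ttl=4h \
        azure_roles='[{"role_name":"Contributor",
                       "scope":"/subscriptions/<sub-id>/resourceGroups/openqa"}]'

    # what a test run (or PCW) does: ask Vault for short-lived credentials
    vault read azure/creds/openqa-test

Vault then revokes the generated credentials when the lease expires, which is what makes the short-TTL approach practical.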
The container idea ended up as a separate repository which contains the Dockerfiles for Public Cloud Watcher, Vault and nginx; a docker-compose file which ties the containers together; some bash scripts which automate everything that has to be done outside the Dockerfiles, like installing Docker on the host, getting the SSL certificate onto the host (needed for communication with openQA), checking out the latest version of Public Cloud Watcher from GitHub, and so on; and a set of vault CLI commands which do the Vault configuration from scratch: they set up all needed namespaces and upload root credentials for each cloud service provider in each namespace, which Vault later uses to generate and delete temporary credentials on request. The Public Cloud Watcher container is based on the Python 3 container. Whenever a new image is built, it takes the latest version of Public Cloud Watcher from GitHub together with the pcw.ini describing which namespaces need to be monitored and how. The container also has attached storage, so the SQLite database with the instance cache survives container recreation. The Vault container is based on the official container from HashiCorp and contains a small config which just defines the in-memory storage and disables the Vault web interface; most of the business logic is in bash scripts with vault CLI commands that do the initial init of a fresh Vault instance and set up all needed namespaces and secrets. So our approach is a fully automated install of both components, Public Cloud Watcher and Vault, from scratch on clean containers every time. Because it's fully automated we can do it fast, and because it's based on stable container versions we are basically safe: we have reproducible builds, more or less. To simplify the Vault setup while keeping it secure, we introduced a third container with nginx. Nginx plays the reverse-proxy role here, wrapping HTTP in HTTPS and acting as a single entry point which routes each request to the appropriate container, Public Cloud Watcher or Vault. This trick lets us keep Vault "unsecured" from the setup point of view but secure from the usability point of view; this is how we work around the fact that Vault does not recommend allowing plain HTTP connections. Now let's speak about future plans. First, and in my opinion most important, we need to improve our code coverage with unit tests; not that we don't have any, but let's say coverage is far from 100%. Another problem which bothers me, as the person who maintains Public Cloud Watcher, is that whenever there is some issue in the cloud, for example Azure changing an API in a non-backward-compatible way, which has already happened several times, I start getting an email notification about the exception every X minutes. On the one hand I want to be notified so I can act as soon as possible; on the other hand, after reading the first email I can skip the next dozen, which won't tell me anything new. So the notification flow needs to become smarter and distinguish between a new problem I need to be told about and something I have already seen. Also, the web interface currently only allows managing VMs, but in fact Public Cloud Watcher cleans up a lot of other entities. It would be nice to see them in the web UI and have a better understanding of what was actually cleaned and when; for now, to answer that, I usually grep the Public Cloud Watcher log file. That's all I wanted to say.
I will be happy to answer any additional questions during the conference or any time after, and I would be even happier if someone considers using Public Cloud Watcher together with us.
In this presentation I would like to talk about Public Cloud Watcher, a tool used by the SUSE SLE QE team to monitor public cloud providers (Azure, AWS, GCE) for testing leftovers and delete them. I will describe: 1. the tool itself (internal architecture and the features it provides); 2. how we maintain the running instance of PCW (it is deployed as 3 docker containers maintained by a mixture of docker-compose, Dockerfiles and some bash scripts on top of it).
10.5446/54668 (DOI)
Hello everyone, my name is Neal Gompa. I'm here to talk to you about sweeter image builds with Kiwi, and how Datto is using Kiwi. First, a little bit about me. I call myself a professional technologist. I've been in technology since I can remember, I've been involved in Linux for nearly 15 years as a contributor and developer in the Fedora, Mageia, openSUSE and OpenMandriva Linux distributions, and I'm a contributor to RPM, DNF and various related software management, systems management and image building tools. Of course that includes Kiwi, the tool we're talking about today. And I'm a senior DevOps engineer at Datto. So a little bit about Datto. We were founded in 2007. We've got 22 global locations, over 1,600 employees worldwide, 17,000 managed service provider partners, and we operate entirely in the channel, meaning that our products are not sold directly to you: we sell to companies that take our services, tailor them for their clients' needs, and sell them on. We have local offices in nine countries that help service providers serve over 1 million small to medium businesses (or small to medium enterprises, as you may have heard the term) around the world. We offer a wide variety of products and solutions to support our MSP partners and their clients, ranging from disaster recovery, backup and business continuity with our unified continuity products, to managed networking with Datto Networking, professional services automation and remote monitoring and management with our PSA and RMM products, and inventory, quoting and revenue management with our commerce solutions. So, taking all that back to building images at Datto: we have a bit of a problem. We can build images for days, but practically everyone did image building differently, with a different set of issues caused by each method. Some of the tools we were using: Packer; Debian's live-build for ISOs; custom shell scripts for making disk images, custom base images and all kinds of weird little things; and virt-builder for making custom virtual machine disk images that don't really fit into clouds and are used for very special use cases. The problem is that we wound up with issues that are just odd. One great example is Packer not creating sane images, because the mechanism by which it produces the image is not sane: it pauses the virtual machine, takes a snapshot and exports it. That was a problem when using Puppet to configure the environment, which includes setting the machines up for unattended updates and such; the apt-daily service in particular was always started and running just as we paused the machine, which left apt and dpkg in a broken state when you tried to boot the image as a new instance. We had to do all kinds of things to work around that. Another example: when the Spectre and Meltdown fixes landed a couple of years back, live-build didn't cope with that very well initially, and we had to do funny things to make it work. That also uncovered another random issue where suddenly XZ compression didn't work anymore, so we had to switch to gzip. We don't know why.
This led us to a core problem we discovered: some of these tools have seemingly inexplicable behaviors, and, something increasingly common among them, they are poorly maintained and their capabilities are incomplete. These tools don't have what I would call a method to their madness; reasoning about how they work was too hard. That makes using them difficult, because when something goes weird during an image build, or in the image produced by the build, it is hard to walk back and figure out what happened and what went wrong. And that eats a bunch of time that could be better spent doing more valuable things, like iterating to build more capabilities, layered systems, solutions and so on. So I started looking at Kiwi as a solution to this problem, because Kiwi is straightforward and idiomatic. It has XML, YAML or JSON based descriptions with some simple script hooks for extra flexibility. And speaking of flexibility, it can build almost any type of image; and if it doesn't know your type of image, you can use Kiwi's Python API to construct custom image types, building on the framework and tooling inside Kiwi to construct anything you want. It's free and open source software under the GNU General Public License version 3, it's actively developed and maintained, and the developers are friendly and helpful. That was huge to me, because it didn't actually work right out of the box for us: Datto produces a lot of Ubuntu-based images, which means working with debootstrap and apt, and there were a couple of issues I discovered along the way, so I went and fixed them. When I sent pull requests they were very quick to respond with feedback, they worked with me on making sure it was right, and then we got it in and they made releases; they're very good about releasing fixes as they are merged. When we started handling some CentOS-based images, that support was also somewhat incomplete, so I added features we needed that just weren't wired up yet. One of those changes was interesting because I didn't really know how to work with Kiwi's extensive test suite, and they were quick, responsive, friendly and, above all, patient in helping me write the unit tests to make sure the behavior worked, giving me hints about the test suite. That made a huge difference: I was able to iterate quickly, get the change into a mergeable state, and it was merged. So now I want to show you a little bit of Kiwi with a sample from one of the descriptions I've been working with. What we have here is a container appliance based on CentOS Stream 9; this works off the test composes that are being released right now, since CentOS Stream 9 content became available last month, so I started working with it. This container appliance is configured to use the DNF package manager, as you can see here.
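Since you can't see the file in the recording, such a description looks roughly like the sketch below. The element names follow the Kiwi schema, but the exact contents of the demo file are an assumption, so treat this only as an illustration of the structure being discussed.

    <?xml version="1.0" encoding="utf-8"?>
    <image schemaversion="7.4" name="el9-appliance">
      <description type="system">
        <author>Example Author</author>
        <contact>author@example.org</contact>
        <specification>CentOS Stream 9 container appliance</specification>
      </description>
      <preferences>
        <version>1.0.0</version>
        <packagemanager>dnf</packagemanager>
        <locale>en_US</locale>
        <timezone>UTC</timezone>
        <rpm-excludedocs>true</rpm-excludedocs>
        <rpm-check-signatures>false</rpm-check-signatures>
        <type image="tbz"/>
      </preferences>
      <repository type="rpm-md">
        <source path="https://example.org/repos/centos-stream-9/"/>
      </repository>
      <packages type="bootstrap">
        <package name="filesystem"/>
        <package name="centos-stream-release"/>
        <package name="dnf"/>
        <package name="coreutils"/>
      </packages>
    </image>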
It's English only, because that's the only locale I care about; the time zone is UTC, because screw daylight saving time. Locale filtering is turned on, which means that any other locales or locale data pulled in by packages get filtered out by RPM. I also have signature checking disabled, because right now this content is not signed, so there is nothing you can do about that. Another neat flag is exclude-docs: I wanted to make the image super small, so I don't need the documentation included, and since RPM has flags for that we just set it and the documentation is simply never installed. We have a small set of packages here, the filesystem package, the branding packages, the package manager and some utilities, and that's about it. And this is just a simple tbz type image, which basically means a tarball with files in it. We should also look at the shell hook that is used: you could replace this with a call to Ansible or some other tool in another language, because it can be anything; as long as it is called config.sh, Kiwi will execute it, so you can have it do whatever you like. This one is super simple: it sets the system up as multi-user with the root user populated, sets the host name to localhost, and cleans out any extra unwanted locales that may not have been correctly marked as locale data by RPM. All right, so let's go ahead and build it. You can see it running through, and if we look at the log you can see it setting up the image, all going through Kiwi: it sets up the repos, then it bootstraps and installs the packages. After the package installation is done, it runs the script hooks, and right now it is creating the tarball, an XZ-compressed tarball with multiple threads, so it uses all the cores on the machine. While it's doing that (I've already pre-created one of these before), we're going to go ahead and boot it up in nspawn. So we'll just start that up, and you see that it starts up like a regular computer would: this container was configured to behave the way a VM starts, so it brings up systemd and then logind. I can log in as root with my super secure password, and there's not much in here, but I can look around: in /etc you can see all the usual files. Let's cat os-release, and you see CentOS Stream 9 with all the expected fields; then cd into dnf, and all the directories you'd expect are there. But maybe I want an editor to edit some files, say dnf.conf. Oh, there is no editor. Well, then what do I do? We will go back and fix that. But before we do, let's take a look at what the build actually produced in the output directory.
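For reference, the demo steps so far map onto commands roughly like these. The directory and tarball names are illustrative; the actual file name depends on the image name and version in the description.

    # build the appliance from the description in the current directory
    kiwi-ng system build --description . --target-dir /tmp/output

    # unpack the resulting tarball where systemd-nspawn can find it
    mkdir -p /var/lib/machines/el9-appliance
    tar xf /tmp/output/el9-appliance.x86_64-1.0.0.tar.xz \
        -C /var/lib/machines/el9-appliance

    # boot it like a lightweight VM
    systemd-nspawn -b -D /var/lib/machines/el9-appliance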
And then if you look at packages, this one's actually just a list of all the packages and this has the name group, the name of the version release architecture and the license data. And this is useful if you want to see whether things have changed between build to build, and also to see what licenses are included in the whole thing. And then the verified one is for checking the file system structure to see how much of it differs from what the RPM database knows. So you can see that yeah there was some files that were deleted because I purged extra locale data. And you can see that there's you know some modifications of some config files and stuff. And that's fine. So, but we want to have a text editor in there. So let's add that. So TXC build config XML. And we're just going to go ahead and add another package. So we'll add nano because nano is the best text editor ever. And we're going to go ahead and create this to build again. We're going to call it temp output two for this directory. We're going to run it again. And so this sets it up again and it goes through this process. And if we want to take a look at what this looks like, we'll take a look at the log here. It's already at the point of creating the image again. And once that's actually done, we will have something to show off. But meanwhile, while that is happening, we should go ahead and clean up or not actually, we can. Well, we do need to be root here because otherwise we can't actually write the terrible but probably want to be in and disc. Vc temp output two. So we're ready for that. And it's actually a it's done. So the that part is created. And we're going to go ahead and tar XVF container C bar live machines. EL nine appliance to. All right, let's make the directory bar live machines. EL nine appliance to. And so that extracts all the files for that because this is just a regular tar ball. So it only has those particular things. So now we will boot up the second appliance that we just created. So this is just like the first one except now it has nano installed. And now, if I go to at C, we go to DNF, and I go to DNF dot com. I have the nano text editor. Now let's let's check and see like what this difference is actually look like. All right, so we made the temp output and temp output two directories. And so let's take a look at diff of temp output. If you have container EL nine packages to temp output to container EL nine packages. And you can see we added just one package nano. Now let's see what that looks like for change logs. This is actually going to be pretty bad because change logs are kind of huge. But it'll kind of illustrate the point I was trying to make. So you see there's a whole new change log that was added that is literally just for nano. There was also some other sorting things and that's kind of the reason why I didn't want to show that real thing about that too hard. Let's take a look at verified. Let's see if there was any other differences. I expect that we might not have any see no differences. So from a file system structure, they look that there was no extra modification. So you can be relatively assured that the only actual change was adding nano to it. And so that's that's super cool and super helpful if you're doing continuous integration and continuous development of these sorts of things. So yeah. 
So, images with Kiwi are sweet. It's a well documented, well maintained project that goes way beyond everything else: it's extremely simple to get started, the community is friendly and helpful for developing advanced setups, and the wide range of platform support is unmatched by anything else I've seen so far. It also has a great mechanism for supporting reproducibly built images, and it lets you easily track how the artifacts are changing. This is really well done stuff, with friendly developers and a friendly community; you really can't go wrong. Here are some references: the Kiwi website and the Kiwi GitHub project, which has some sample descriptions you can use to see how to build for various distros, platforms and image types, and my own demos that I put up on Datto's GitHub. The link is there, and the slides will be available after the talk anyway. And yeah, thank you for coming to my talk. Thanks.
How Datto uses KIWI to simplify building appliance images. One of the more heavily underrated openSUSE projects is the KIWI image builder (on GitHub). In the last few years, Datto has started using KIWI to replace the patchwork of custom image build tools and provide a consistent toolchain for producing various appliance images. This talk introduces the KIWI appliance image builder, outlines some of Datto's use-cases for KIWI, and how Datto uses KIWI to support those use-cases. This also includes a brief demo of building an image with KIWI.
10.5446/54670 (DOI)
Hello everyone, welcome to my presentation about building a language server for Sold State. Part of the OpenSuser Virtual Conference 2021. My name is Dan, I am a software developer, part of the developer engagement program at SUSE. I am essentially responsible for building tools for other developers, for example this one. This was just a hack with Project Blitzel. So besides my day job at SUSE, I am also part of the OpenSuser community, where I package maintain a few packages. I am also quite active in Fedora, where I have been recently elected into Fesco and I am also there part of the i3 special interest group, where we shipped the i3 spin for Fedora 34. I am also package maintainer Dan, contribute to a few upstream projects here and there. So but without further ado, let's take a look at today's outline. So first I'd like to cover what is actually the language server protocol, in case you have never heard about that. Then I also cover what is SoldStack, the why and when, so why did we do this, when did we do this, what were the circumstances around that, the architecture of this server. And then we'll have a brief demo, what it currently can do. And finally I'd like to showcase a few challenges that we faced and provide a brief outlook. And with that let's take a look at what is the language server protocol actually. So as you might have guessed, that's it's a protocol, actually a JSON RPC protocol and it addresses this old problem. So you have a whole ton of programming languages and you have a whole ton of different editors and they are all written in different programming languages, have different APIs if they have APIs at all. Now if you have programming language A and you want to give your users, so other developers, you want to give them code completion, documentation showing etc. Pp in five different editors, you have to write a plugin for five different editors and that's a whole lot of work and every single person implementing something like that for their programming language has to do that. And so there's a lot of duplication going on. The language server protocol tries to address this in the following way. So it defines a common protocol for all these kinds of stuff, for all these things that you want to have in an editor when working with programming languages like auto completion, diagnostics, documentation showing, code formatting for instance, jump to definition, jump to references, refactoring etc. And so the language server protocol defines how a language server which is some kind of backend program that analyzes your source code and the editor simply talks to this language server and says it hey I'm at this position in this file, what can I auto complete now or is this thing correct or what's the current symbol at this point, does it have a documentation, where is it used and so on. And so that's defined by the language server protocol. And the cool thing about this is you as a developer of a programming language, you only have to write this backend server and you can talk to all, you can provide all these nice cities to your users independently of the editor that they're using provided that it talks the language server protocol. On the other hand, if you're developing a new editor and you want your users to be able to have access to all these nice things, you just have to implement the language server protocol and you can talk to all these backend servers and you have access to a whole ecosystem. That's pretty great. 
So that's why we looked into it: to improve the editor integration for SaltStack. What is SaltStack? SaltStack is configuration management software. It's quite comparable to Ansible in that it can run in agentless mode, but it also provides a server-based pull mode like you would know from Puppet. I would say it's more closely related to Ansible, since it's also written in Python and it also uses Jinja2 and YAML for its main files. What's also notable is that SaltStack is the configuration management software behind Uyuni, the upstream project of SUSE Manager. What you usually write when using SaltStack are so-called Salt states: files that describe the desired state of a system. How does that look? It's a mix of Jinja2 templating and YAML, and here you can see an example. We describe something that should be a web server, which in this case means we want a package installed, Apache, and we use Jinja templates to distinguish the package names of different distributions: on Red Hat variants the package is called httpd, on Debian and derivatives it's apache2. Keep in mind that, in contrast to Ansible, the Jinja templating is applied before the result is fed into the YAML parser, which has the advantage that you can really leverage the full power of Jinja templates; in Ansible you can only use them inside YAML strings, which has its upsides and downsides. So why did we do this, and when? We found editor support for configuration management to be rather lacking. There are extensions for SaltStack for various editors, and the same goes for Ansible, but they are tied to particular editors and not really that powerful: they usually provide good completion, diagnostics in certain places and things like that, but it's not really context-aware, and it's limited to one editor. Configuration management is becoming increasingly complex, and with really powerful tools like SaltStack and Ansible it's becoming more like writing code, yet the tools you have as someone writing this stuff are not on par with powerful IDEs. We wanted to see whether we could write a prototype of a language server that shows it's possible to provide more. So Cédric Bosdonnat and I sat down during this year's Hack Week 20 in March and developed a prototype. What did we do? We wrote a language server in Python. We chose Python because we are both familiar with it and because Salt itself is written in Python, which gives us easy interoperability. For the Language Server Protocol itself we leverage the pygls library, which lets us focus on providing the data while pygls takes care of all the actual protocol handling; we don't have to worry about that. We use PyYAML for the YAML parsing. Initially we used PyYAML's parser directly, then briefly switched to ruamel.yaml since it provided more features, then switched back to PyYAML; now we only use its scanner together with a custom state machine that Cédric implemented, so that we can parse broken YAML, which is quite important when the user is in the middle of typing.
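To give a feel for what pygls buys you: registering an LSP capability is just a decorated handler function, and pygls handles the JSON-RPC plumbing. The following is a minimal, self-contained sketch, not our actual code, written against the current pygls API (older 0.x releases import the protocol types from pygls.lsp instead of lsprotocol).

    from pygls.server import LanguageServer
    from lsprotocol.types import (
        TEXT_DOCUMENT_COMPLETION,
        CompletionItem,
        CompletionList,
        CompletionParams,
    )

    server = LanguageServer(name="salt-ls-sketch", version="0.0.1")

    @server.feature(TEXT_DOCUMENT_COMPLETION)
    def completions(params: CompletionParams) -> CompletionList:
        # A real implementation would inspect the document text around
        # params.position; here we just offer two hard-coded state functions.
        items = [CompletionItem(label="pkg.installed"),
                 CompletionItem(label="service.running")]
        return CompletionList(is_incomplete=False, items=items)

    if __name__ == "__main__":
        server.start_io()  # the editor extension launches this over stdio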
In the future we'd also like to use Jinja2 for the Jinja templating, but unfortunately we are not there yet. A really tiny component, currently, is the front end, essentially the VS Code and Emacs extensions, but these don't do a whole lot: they just launch the language server and tell the editor "hey, if you edit SaltStack files, talk to this language server", and that's about it. With that, let's take a look at what it can currently do. What I have here is VS Code with an example SaltStack file open. First, let's look at auto-completion. One of the things SaltStack has are includes: you can tell Salt to include the definitions from other files, and the language server offers you the other files you could include; these are all the ones this repository currently has, so you can just trigger completion and include whatever you like. That's one thing. Another thing it completes are the submodule names: it figures out which module you have before the dot and offers the matching submodules. If you're familiar with this, you'll see it only gives you those that belong to the file module; if I replace it with the git module, it only shows the git submodules, the same for pkg, and so on. So that's what's supported in terms of auto-completion. What you can also see, at least I hope so in the recording, are the breadcrumbs up here. These are provided as so-called document symbols by the backend, and you can use them to jump around; it essentially means the language server is aware of the structure of the document. You also saw some documentation show up, which could be displayed in various places as well. And the last thing that is supported is jump to definition. SaltStack has these so-called requisites, meaning that one state requires another one. You just right-click, choose go to definition, and it jumps to the correct place. It works from other places too: here I require this other state and I can jump right into it. That should be about it for the demo; now a few challenges we faced. One of the challenging parts is that while you are typing in your editor you don't really have valid YAML, so we need to be able to parse broken YAML. That's also why we only use PyYAML's scanner and have this custom state machine that Cédric implemented. Meaningful testing is, as you might have guessed, pretty challenging, especially with different editors in play. And the most challenging part at this point is really the Jinja2 interpretation. The nasty bit is that the templating is applied before the YAML parsing, which means that extracting information from such files can be really tough, because you can go really crazy with the Jinja templating. This is, as yet, an unsolved problem. Let's take a brief look into the future. Unfortunately, this is really just a side project for us and we don't have a ton of time to invest in it.
So unfortunately progress has been a little slow. The biggest thing that still needs to be implemented is, as I already said, the Jinja2 parsing. Then there are a few other things we could implement, for instance showing the documentation of various elements, integration with salt-lint to provide linting of your document, and auto-completion in more places. And, as I showed you, you can jump from a requisite to where the state is defined; it would be nice to do it the other way around as well, jumping to where a state is referenced. Here you can find a few links: to the source code, to the extension in the VS Code marketplace, a blog post summary of the Hack Week, the slides (also on GitHub) and the LSP specification. The obligatory legal slides. And with that, I'd like to thank you for your attention. Have a nice day and bye.
The Language Server Protocol
A language server is a piece of software that speaks a JSON RPC protocol (called the Language Server Protocol, abbreviated LSP) to provide text editors with code completion, diagnostics, documentation, etc. There are several editors and numerous language servers already implementing this protocol. The advantage of the LSP is that each language server works independently of the used text editor/IDE and thereby makes all implemented features available to a wider audience.

Salt States
SaltStack is a configuration management software like Ansible or Puppet which allows you to configure your machines via so-called salt states. Salt states are YAML documents with support for Jinja2 templates:

    mysql:
      pkg.installed:
        - name: mysql
      service.running:
        - name: mysql

    web_server:
      pkg.installed:
    {% if grains['os_family'] == 'RedHat' %}
        - name: httpd
    {% elif grains['os_family'] == 'Debian' %}
        - name: apache2
    {% endif %}

The Salt States Language Server
During this year's hackweek #20 Cédric Bosdonnat and Dan Čermák built an initial prototype of a language server for salt states. It already supports rudimentary completion, go to definition, document symbols and it can show the documentation of salt modules. This talk will give a brief overview over the current state of the language server, how we got there and which challenges and surprises we encountered along the way.
10.5446/54677 (DOI)
Hello and welcome to this presentation about integration testing with the environs framework. My name is Andriy Nikitin, I am a member of the OBS team at SUSE, and my team is responsible for the infrastructure behind Open Build Service. I understand that integration testing is a very wide and quite complex area. For me the biggest problem is that there is no common way to communicate how different products integrate, or at least to demonstrate that different products can work together. Sometimes it's easy to do, sometimes very hard, but there is no common approach we can follow to easily understand each other. So what does cross-product mean here? It means there are several products; each product has its own life cycle, its own teams, its own quality control, and each is presumably of good quality on its own. But it's hard to prove that several products can communicate together, or at least it's hard to script those scenarios, and usually there is no single expert who can cover all the involved products at the same level. So I think we currently lack a tool that lets us script complex scenarios of cross-product communication. When I speak about integration testing, we are of course speaking about test scenarios, and test scenarios usually run on some product topology. That means there is a list of products involved, each in a particular version, maybe a particular distribution type, maybe some fork or custom build, and so on. Dependency management is also a very complex topic, because each product definition may need tweaks in how its dependencies are satisfied. And of course there are the test scenarios themselves, which define expected behavior or demonstrate some problem in the cross-product communication. Below is one example of a product topology definition: we have a web server, an app server of a particular version, a database of a particular version, and some project that we are currently working on, and we want to run some scripted scenarios on this topology and see how it goes. And now come the dream framework requirements. I have spent quite a lot of time on integration testing and cross-product communication, and I find that the test framework that runs the tests should not actually care about how dependencies are satisfied. That is a different dimension: no matter how we satisfy dependencies, we should be able to run a script and see whether the products work or not; maybe a failure is because of a dependency, maybe not, but we are still able to run scripts. So dependency management is optional; it should be handled at a different level of testing, or maybe it is not part of the testing itself at all. Also, the test framework should not enforce how we deliver the products: maybe we build them from source, maybe we add some default packages, maybe we get them from some particular distribution, and the test framework should be able to use binaries and test artifacts from different kinds of distributions.
Then, the test framework should not need privileged access to the system. We can start database servers without privileged access, we can start web servers without privileged access, and most tools run without privileged access. It simplifies troubleshooting so much when all the services involved in a test scenario run under a single user. It doesn't cover all scenarios, but at least it is possible to test 99% of use cases in a single-user environment, and it helps enormously that we don't have to care about permissions in this cross-product communication. Again, the goal here is to show that at least in some scenario the products can communicate properly, and if it covers 99% of all test cases then it is a good framework. The scripted test scenarios may be flat shell commands, and they should not bring in extra complex dependencies, because we should not spend time troubleshooting dependencies that are only required by the testing framework; that is not where we want to spend our time either. And again, the topology can be an input parameter to a test scenario, so we can run the same test scenario on a different version of the database server, or on a database server built from source with some patches applied, and we want to make sure that the scenario works well on different topologies, or that a problem is fixed in one topology, or that a problem was introduced, or that performance is better, whatever. This dream framework would bring a new level of communication: if we can find a way to satisfy all these requirements, it will be very easy to describe complex scenarios in bug reports and tutorials, maybe in automated testing, maybe in proofs of concept, and it will improve cross-team communication. And to be able to handle different versions of the same product, or different distribution types, or forks, we need to introduce a new abstraction layer that hides all the details specific to these different versions. So meet environs. An environ is a special folder with executable wrapper scripts generated inside it. In this example we have an environ called mydb and it has a start script; if we use such a folder in our script, we basically don't care what is inside that start command. We can of course look into it if we need all the details, but if we write a scripted scenario it is just "mydb start", and we know that we started some database server. How we started it and exactly which database server is there is not that important; it is an input parameter, so in every next run it can be different, and all we care about is comparing the behavior of products of different versions or maybe different vendors. So again there are start, stop and status commands, we can also execute SQL commands against that database and compare the output, and we can do the same not just for a database server but for a web server. It can be the Apache web server or maybe nginx or something different; all we care about is that we have wrappers generated for it, so we can start that server, check its status, query some resource inside that web server using curl, and then stop it. A small sketch of driving such wrappers from a test script follows below.
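A minimal sketch of that idea, driven from Python: the folder and script names (mydb/start and friends) are taken from the talk's example, while everything else is invented for illustration and is not the framework's exact interface.

```python
import subprocess

def run(environ_dir, command, *args):
    """Invoke a generated wrapper script (e.g. mydb/start) and return its output.

    The environ_dir/command layout is an assumption based on the talk's example.
    """
    result = subprocess.run(
        [f"./{environ_dir}/{command}", *args],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Start a database and a web server without caring which vendor or version
# the topology chose; that was decided when the environs were generated.
run("mydb", "start")
run("myweb", "start")

print(run("mydb", "status"))
print(run("myweb", "status"))

run("myweb", "stop")
run("mydb", "stop")
```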
So the idea is to have these folders with script generated for each topology that is defined as input parameter and moreover it is not one, only one instance that we can use there are some several slots that we can use so we can generate three database servers, they can be of the same topology, I mean the same version, the same distribution type or they may be completely different and again we can run test scenario and see outcome if it changes or not, maybe we will use them in some load balancer or maybe combine in cluster or maybe try replication and see if it works for particular input topology. For example replication between different versions or chain replication between three versions or replication between servers of different vendors. We have one script that covers them all and topology is input parameter to this scenario and the same as web server, we can start several web servers and use them independently or together maybe some load balancer, etc. And again one aspect that we use for this environment is distribution type, so either we use default packages that are installed on system like in the first comment or maybe we can generate environment that can work with product that is built from source code, then we specify location basically to source location of source of this product and this environment will be able to basically build this product and prepare it for use or maybe we can use some target distribution and want to compare behavior between version or maybe source code that we applied patches. So and each environment it will cover specifics that needed to handle this particular distribution type in this particular version. So environments not most like framework that we can use for any product, it's more an approach for using so if you have one product that has only variants and second product that has only variants we can either build script that shows interaction between these products and then put input parameters version or maybe build it from source code, etc. And then so this example is kind of useless but again it demonstrates I think powerful how powerful this approach is. So let's start randomly either Apache or engineering web server. We generate this is bash script so we generate the number between 10 and 5 and between 1 and 0 and 10 and then randomly either create Apache environment this is special code that is used in this repository or engineering environment and then we start that environment here we don't know maybe Apache or maybe engineering then we create some file in special folder that is predefined in this environment and then we query this file and basically I did this test is like and check that we can be queried we get string my test and we can do an infinite loop if we want or we can complicate this scenario as much as we want. About the variants I believe that either the script would be much more complex or it will introduce some abstractions that against hard to maintain and hard to keep in mind all the details that are going on. And one important aspect that we can use the same comments in script and we can use them from terminal when we like do some scenario manually when we start some services or try some comments we execute the same comments in command line and we use that completion to see these comments. 
Another real-life example is openQA. It is normally quite hard to get openQA started, but with environs it comes down to a few commands. It does rely on all dependencies being satisfied, but once you have made sure of that and you see that the script still doesn't work, you have something concrete to ask about: "OK, run these commands." For another person trying to help, it is much clearer what is going on, and they can then ask for the specific logs they know about or use inside this environ. Another example is MirrorCache. This is quite a complex scenario: it starts four Apache instances and one MirrorCache instance. MirrorCache is a mirror redirector, so it accepts requests and tries to redirect them based on the location the request came from; here it redirects a specific address to a specific Apache server according to how the mirrors were defined in the database. So this concludes my presentation and my experience. I hope you find it useful. Happy testing and see you next time. Bye.
Shareable scripting cross-product scenarios

Do you know the difference between starting a mariadb vs a postgres server as a regular user? Or apache vs nginx? Or maybe you want to know the specifics of a working example of starting an rsync server? There is no difference and no specifics - just use the generated start / status / stop scripts. And there is more: spawn as many services as needed(\*), configure ssl for a cluster, build services from source, ... - all without affecting your system(*). The environs framework suggests a universal approach for managing various services by generating bash wrappers, with the following benefits:
- object oriented approach - it hides the internal specifics of service handling and allows an easy way to review and tweak them;
- brief scripts for complex scenarios in complex topologies, without privileged access to the system;
- easy to compare behavior of various topologies;
- demonstrate "how-to" behavior, share reproducible cross-product scripted scenarios in bug reports or TDD / BDD;
- no OS flavor limit - run everywhere where a shell is(*);
- no extra dependencies - run in cloud, container, VM, CI or local machine.
The main goal behind the environs framework is to cover early integration testing and provide a way to script and share cross-product behavioral scenarios without root access to the system. So far the framework includes scripting possibilities for postgresql, apache, nginx, rsync, mariadb(*), openQA, MirrorBrain, MirrorCache, zypper.
10.5446/54680 (DOI)
Hi, my name is Marie Nordin and I am Fedora's Community Action and Impact Coordinator. I have been involved with the Fedora project since 2013, starting with an Outreachy internship working on Fedora badges. I stayed involved with Fedora over that time, helping out with badges and the design team, and about a year and a half ago I stepped into a full-time role supporting the Fedora community as their FCAIC, helping with all kinds of things such as supporting initiatives like this one, administrative tasks, events, support, and so on. So next we have Mariana. Hi, my name is Mariana. I am a Fedora contributor and I have been contributing to the Fedora project since 2016. Currently I work as a product owner for phpList, which is an open source email marketing solution. But I also enjoy contributing to open source in general, mostly by organizing local events, and now with COVID everything has gone online, which is how we are able to join you today at this open source conference. Hey, I am Sumantro. I work as part of the Fedora team at Red Hat, and over the course of the last five years that I have been involved with Fedora I have served in multiple small roles, including being a part of Mindshare and being a part of the Council as a representative. Over the last five years I have helped out with mentoring programs like GSoC, and I have been continuously involved with user support groups where I go and answer questions for people who have update issues. So today we are going to start by talking a little bit about the Fedora project. Fedora, as most of you know, is a Linux operating system known to the wider community as a desktop or server OS. At the core of the project we are a very global, diverse set of people. We build software based on good open source licenses and we try to keep it innovative and on the leading edge. We have built a strong community of packagers, developers, writers, designers, sponsors and curators, and Mariana is going to talk more about the project. Fedora is a user-focused operating system, meaning that the end user is the primary focus, and this is why we have a very rapid release cycle; it is not very common for an operating system to have a release cycle of under six months. There are also several spins and different Fedora flavors meant for different purposes: for example for developers, for designers, or even NeuroFedora, which is meant for scientific purposes. Other open source communities whose software is included within Fedora also share their latest developments and changes in order to make the end user's life easier. When it comes to the core foundations the operating system is built upon (Freedom, Friends, Features, First), a very important point is that the operating system doesn't come only in English. It is important that the end user can switch to a different language, because we want to reach more and more users and hopefully more and more contributors. The Ask Fedora forum, which is a place where end users can go and ask Fedora-related questions, is available in several languages, and people can ask and get responses in several languages from other users and contributors. So I'm going to talk about the history of the Fedora ambassador program. First of all, what is the ambassador program?
The ambassador program is a 15 plus year program focused on outreach to other communities, local user groups, technical universities, these types of places to educate people about Fedora and help them learn to use it and potentially even become Fedora contributors. So that's kind of the goal and mission of the ambassador program all along. And what are the join SIG advocates and comm ops teams? So we have a lot of teams in my chair and we ended up with these kind of branches of community outreach because we needed to fit in different needs as the community evolved. So the join SIG actually, they are a very front facing group that focuses on connecting new contributors to the community, helping them learn about the community and potentially finding a good fit for contributions or just a group of people with similar interests. Advocates are a group of folks that run small events, but they don't go quite through all of the same things that the ambassadors might have to learn or be educated about. And the comm ops is an internally focused team working on community operations. So originally their focus was kind of data analysis and looking at different ways to improve operations inside of Fedora. So probably wondering like, how did we get to the community outreach revamp? Why is this needed? Well, over time, different things happened that changed how the ambassador program was working. It basically didn't really grow in a sustainable way. There was some questions of how finances were being managed and who was going to manage them. And there were some restrictions around that. So based on the different changes that came into place, ambassadors weren't feeling great. They were not feeling as connected to the program. They still love Fedora and they are still kind of going and doing some of that outreach on their own. But the ambassador program itself kind of started dying off. So about a year ago, a little bit over a year ago. I wrote up a proposal based on some research I had done, a book I had read called Switch How to Change When Change is Hard about how we could take the ambassador program from kind of dwindling and kind of seen as a failing program to a success. So I wrote that proposal. I proposed it to the Mindshare Committee and after rounds of feedback with the community, everyone was on board and then we got started. Once our team was formed back in July 2020, we started planning our first steps. So the very first thing was actually to document our next steps and what we were going to work on in the upcoming months. Initially, we created a trial award hoping that we will have more people on board and trying to help us from the community. But eventually we realized that people were not very happy with it and we retired it. But we have a public hack and deep file where we keep notes on everything we work on and how we process everything we work on. Once the team was formed, it was the time to announce it to the community. And this is why we had a couple of calls, video calls with community members and the Fedora Council and some of these are also on YouTube from last year. We shared with the community everything that Marie mentioned earlier, why this team was formed and how and what is the end goal of this initiative. The very first thing that we worked on was the ambassadors group cleanup. This is why the team was formed and this is the first thing that we did. 
What we did was that we tried to figure out which ambassadors had not been active in the past six months from their FAS account activity. The FAS account is a global account that you have within the Fedora systems and you can log in everywhere with that account. We went back six months. So we did that in November. So in November and going back six months, we checked who has not been active through their account and we reached out to them. We let them know about the ramp. We told them that they were going to be moved to the emeritus group and if they wanted to continue, they are more than welcome to continue holding the ambassadors title or if they want to come back in the future, that is more than fine. The next thing after the ambassadors group cleanup was the community outreach survey. This is one of the things that we did that I am very proud of because we got very interesting results back. We created a survey where we prepared a list of questions we wanted to ask community members, both ambassadors and advocates. The results that we got, some of them were pretty surprising. For example, we found out that Fedora contributors love self-organizing and organizing events without asking for funding or support from the Fedora Mindshare Committee, which is the official committee where you go to ask funding or to make a proposal. This meant that there is a lot of Fedora activity out there that we were not aware of because people don't share it. This is a very important insight to work on in the future. The next thing was the Mindshare team interviews. Mindshare is a group of people within the Fedora community which tries to not control but have an eye on the different Fedora teams within the community. There are representatives from every community. We prepared a set of questions in order to find out what the Mindshare team members think about the community and their proposals and everything. The next thing was to become a Fedora objective. Fedora has objectives for each release cycle. Most of the times, these are objectives that are about a certain feature for the operating system. This time it was different. It was a community-related objective. We hope to have our objective done by the end of this year. So, the revamp. Our work continues in the upcoming months. Our work in progress project, I like to call all of these things sub-projects within the revamp, is the Royal Handbook. The Royal Handbook is meant to be a little bit of documentation on what each contributor slash community member can do within the community. But not as a job description. So if you're part of the marketing team, you do this, this and this. But what do you receive back? So we try to create documentation from the contributor's perspective other than just a list of tasks you can work on if you join a team. The next thing was helping the Fedora Council with some questions. An engagement survey was launched from the Fedora Council and it's still going on. It will be going on until the end of this month. In order again to find out some, a little bit more details on also the use of the operating system and what a Fedora user does with the software. So far we have presented our revamp on the FedCom CZ, also at the Fedora Release Party we had last month. And we will be speaking at the community central in a couple of days. And you're more than welcome to join us there as well. Another part of the revamp that we're lucky to have is an outreachy intern. So I have a background in graphic design as I mentioned before. 
So I am mentoring an internship through outreachy. And we have brought this person on actually starting at the end of May. And she's already been doing awesome work for us. And she's going to help us really try to cement some of the identity by helping us with branding and resources and strengthening those assets like the role handbooks that we're talking about. So she'll be with us. Her name is Daria Chowdhury through August, the end of August. So she'll be with us through Nest, helping us with work adventure for that. And she's also doing things like updating the logos, providing infographics to help understand the team structures and the team roles. We're making some swag for the team to be able to hand out and to also just use on their own computers in places where they want to look at them. We also have this really cool thing in Fedora called Cheekcubes. And we've had them around for quite a while, but they haven't been updated. So we're looking to update those and kind of modernize them and make them really the new Fedora logo and make them accessible to everybody to print at home. So we're also working on how to join Fedora printable handouts so that folks in the ambassador teams and outreach teams have these types of things that they can just use at home really to help empower that kind of self-organizing. And then also Team Splag. We hope to get something for the folks on the different teams to help them feel a bit more connected. Coming to the part of huge, right? So Mary and Mariana talked at length about what we have done. And now, when it comes to how we are looking to present this as a community give back to the upstream, which is we want to make Fedora's community Amphicrous model as one of those smart modern robust models for everybody to adopt, it's going to be more like an open ambassadors program model, which others can also take as an reference to build their ambassadors program. More importantly, this particular way of doing the work is to make sure that we sustain as we grow. So the whole idea would be to sustain our current contributors as well as grow, or rather provide capacity to grow through this program. In the past, we have suffered a lot with growth and we have severely suffered a lot about sustainability. So the whole idea was to bring content more than more reachable or onboarding guides more than reachable to everybody. And one of the ways we have been trying to build our approach is to have this role handbooks have been made in multiple languages or six to eight key languages. We want to also make sure that these are provided as a form of onboarding guides to all the members who want to join as a part of this. So that they can actually understand the change much better than they can perform in cognizant with the change that we are trying to bring. The whole thing that we have tried to focus upon with this revamp is fostering this identities of individuals and along the stretch. So one of those way that we want to build this identity office to make sure that we create the sense that everybody needs to work together. Marketing needs to design and design needs to work with translations and everybody needs to work as a single unit to make this revamp possible. 
The way we are trying to make this much, much easier for more and more teams to work with because we are having surveys as we go on to build up this awareness inside the community and that will help us with this long term strategy of involving more and more contributors as a part of the ongoing revamp as well as after the revamp. Revamp is over. We are going to get more of it. So, moving on, Marvi, you would have some insights about how this whole program can influence this task. So yeah, I want to share a couple of insights or kind of viewpoints that are around this revamp. So the number one being that we are applying something I call RISE to this initiative. RISE is something that I came up with to kind of evaluate and support the Fodor community and basically it is made up of rec emission incentive support empowerment. And really I feel like these four traits really need to be in place for a community to be happy and healthy. So as we have been doing this revamp, we have been putting the different sub projects we have been working on as Mariana said, you know, evaluating them with this RISE concept. Are they providing these things to the Fodor community and can we do these things better? We have also seen a wonderful shift in attitude from the community. As we mentioned, there were some issues and conflict with how things had evolved with the ambassador program and outreach in general. We are a huge community and people are very passionate about their identity as Fodorans and their identities as Fodor ambassadors. So there were a lot of emotions and tension that was around this topic. So when we first started, frankly, we saw some negativity and some folks kind of thinking that this wasn't something we could accomplish. As we continue to work on it over the last year, we have seen so much positivity. People are getting excited when we talk about it now. People are coming to our sessions at the release party and asking really good questions and helping us form this and continue to evolve the revamp and make it a success. So we are super excited to see that. Lastly, Fodor is a huge place and we are still trying to raise awareness about the work that we're doing and hopefully get people excited and ready to kind of start up when all of the documentation and all of the resources are in place. It's going to be a great happy day. So thanks everybody for coming to our presentation. This is a recorded video, so we are going to be alive in the chat right now to take any questions and we hope to give you an update again in the future. Thanks again.
The Fedora Project has been a diverse project since its advent. Fedora has been shipping Workstations, Servers, Cloud, and IoT operating systems as well as many more amazing things to engage developers, users, and innovators worldwide. In earlier years, Fedora outreach was primarily executed by a group of people referred to as Fedora Ambassadors. The Ambassador Program has had many success stories of community growth during its 15+ year history. However, as time moved on the program began to grow, but not scale and adapt. Different bodies of governance within Fedora had different ideas of how things should be run. With no scalability, participation in the program declined. This year, we see a pandemic sweeping across the globe and all events have gone virtual. There has been no better time to revamp the Fedora Ambassadors program, as well as the entirety of Fedora’s Community Outreach teams. The Fedora Action Impact Coordinator, Marie Nordin, created a team formulated of two co-leads, Mariana Balla and Sumantro Mukherjee, and a group of volunteers (Temporary Task Force (TTF)). This team will work to address the historical pain points, create a new vision for community outreach in 2020, and re-engage the various teams & the Fedora community. Attendees of this talk will learn about how we got here, how we came up with a proposal for change, and how it is being executed. We welcome anyone interested in Fedora, community, and outreach. Attendees can get insights into the Fedora Ambassador overhaul, learn how to get involved, and give constructive suggestions to help the Community Outreach Revamp succeed.
10.5446/54684 (DOI)
Good afternoon, evening or morning, if everyone. I'm Patrick Fitzgerald and my little presentation in the tiny little bit of code is all about zero configuration of whatever you want, not just files and printers. So what the hell is zero configuration? What's DNS? What's a Vahi? What's multicast DNS? And to get to that, we're going to have to go through a couple of steps. So if you know a lot about networking, you can get bored. If you don't know anything, hopefully it'll be helpful. And our use case of why we needed it, what we wanted to do with it and how it's actually working very well. There are some limitations and there's a bit of code we're going to go through. And we're going to do a very small demo because, well, that's what I promised. But actually having done it, it's not this, it's underwhelming because it just works. So this is me, Patrick Fitzgerald. So you have required magic. We specialize in large scale Linux deployments. Been a programmer since forever. Spent about eight years working in film and television before it all went digital. And a friend of mine once described me as being creative in all directions, but I'm not sure if he was insulting me or complimenting me. I'm not sure about that. Some of the things I've done in 2010, we built with a colleague, we built an open source cloud based on Suiza Linux. And it's still running in data centers in both Zurich and London and it's got some large financial institutions using a third party software package that basically the people we're hosting for. And I'm also a refugee because of Brexit because about six months ago I realized it was all turning quite sour and the relationship from Britain to the rest of the world is troubling, at the same least. So I've moved to Germany and enjoying it very much. And I'm survivor of many things, including a cardiac arrest that happened around this time two years ago. But that's something for a drink later. So, bonjour, as Apple like to say. It was actually, I think it was originally designed by Apple and when you connect your laptop you can see a printer and you can also see other devices that are on your same network segment. If you've got a Mac, which I don't, that I understand, that you turn it on and if you've got a time machine backup, it finds it automatically. And how does your computer know where your another device is on the network? And of course you plug in a Linux or Windows or Mac system and you can see other hosts on the same network. How does that little piece of magic work? Because it's not necessarily an obvious thing for it to be done. And if you're, you know, why do you care? Well, if you're not a developer or a product designer, you've probably got no interest in this so you should head to the bar anyway. But it's just a bit of interesting work that we had to do because we're doing a large scale implementation of some of Linux across thousands of desktops. So, the way this works is two technologies. One is called multicast DNS or MDNS and the DNS service discovery or DNS SD. That's been implemented in all the major desktop platforms for some time, way back in 2002 with the Mac. And Windows is taking its time and gradually implementing it. I can't imagine why it's taking so long for Windows to get everything working. It's very simple protocol. They probably had their own ideas as to what to do and how to do it like Microsoft tend to do. But the protocol itself is very simple and it works. 
Similar things have been developed to do similar things in different environments. For example, your computer will also, depending on how you've got it all set up, may discover your TV. Your TV might discover your storage device and offer to connect to it or to your music players. And this is, they're all different implementations of the same thing, which is I've got something to offer you. I've got a service I need to offer you. How can I tell you about it? So this is just a bit of a detail and a simple trip down the network lane. So a local area network is defined as, well, it's technically defined as a broadcast domain. But that's defined by a bunch of computers, network together with a router at one end. And everything that has to go off the network goes through that router or multiple routers in an enterprise situation. A LAN, and I know this because I was programming network code 20 years ago in Assembler, is different sorts of packets are sent across at local area network. And I'm simplifying here because this is all based on TCPRP. There are similar architecture that exists at layer two. The network layer that we're interested in is TCPRP. So it does broadcast. And the broadcast is essentially sending a packet to every machine on the network. Now, that if you're in a switched environment or in a hub, if you're in a hub, you're connected to a hub, not a network switch, which most people have switches now, 99% of I think network installations are switched. But if you're a hub, I send you a packet as a broadcast and every machine on the network gets that broadcast. Similarly, it works the same way in a switch. But the broadcast is something that the machine and the network card is told to listen to and listen for because it's something important. And it might be something similar, simple like I want to get, in fact, I can't tell you the circumstances where they use broadcasting, but it's something like when you join a computer, when you start a computer up, it might send a broadcast to get pixie information or something like that. Multicasting is similar to a broadcast. And it's sent to a certain address, but that certain address is then distributed across all the connected hosts that are turned on. Now, the reason why there are two different types of that is because a broadcast is almost always a non-routable. So your broadcast will get to the router or router. I'm stressed, so I can say it either way. And it stops at the router because there's a slide coming up. There we go. There we go. But typically it's non-routable because you don't want those kind of broadcasts to escape to other networks. Multicasting is also similar, is pretty well typically non-routable as well. And a unicast is when you send one packet from my machine to your machine and no one else knows about it. So in all cases, there has to be a service, a network service or an application sitting on the system that is receiving the information, receiving the packet. Otherwise, it is accepted by the network card and rejected at some point. So, yeah, broadcast packet is sent to all addresses. And the thing is about that, it consumes CPU. So even if there's nothing running on that machine that accepts that packet, the network interface card has to accept that packet and then pushes it up from the hardware into the software, through firmware into the software that the operating system is. And the operating system says, I've got nothing to do with this packet. I don't know what it is. 
Or it says, oh yeah, I've got something that will accept that. So broadcast is quite, well, back in the day when I was doing it, you could have a, you know, if everyone's broadcasting, it would physically slow down the system. It also, a lot of network protocols require a broadcast to be answered by another broadcast. And if you get into that kind of situation, it ends up in packet hell. So the same thing happens with multicast. And again, it goes up to the network stack to discover if something's working. Now, routable or readable broadcasts, I mean, what happens, why would you stop the router and why would you stop the multicast packet at the router? Because perhaps there are things, you know, you could send to everyone, you don't have to send it. You don't have to get a list of everyone. You send it once. And if you're paying attention to what I've been saying, then you'll end up with absolute chaos. Because if network routers and your routers to your, connected to your broadband networks could see every other computer on the internet, you know, no one will get anything done. Literally, it is like everyone's yelling at you, broadcasting information at you. And you're trying to listen. That's the worst thing is that every machine, every node on the network has to listen to what everyone else has got to say. And pretty soon, you know, you can't actually communicate with anything. You can go to the broadcast storm page on Wikipedia, it's a more known thing. And of course, so there are multiple standards. It's just a bit of a comic there from XKCD. It's, as I said before, there are multiple standards for multiple industries that have all had their own ideas as to what is, you know, they've got something they want to plug on, plug into a home network. How do we make that? Not a lot of it is properly proprietary and not open source. And, and, but gradually, they're coming, they're coming down into a, a common standard, which is actually called zero cost. Let's get some water. So what, so how do we get to, to this? We've got a Linux deployment tool called Snoop here, because we can't think of anything better for it to call it. And one of the use cases was rather than just doing a deployment of Linux onto a thousand or 10,000 workstations, if we're doing that, and we could inject a monitoring tool and return information about all sorts of things back to a centralized node. So it could be managed better. We could get displaced, warnings, all these kind of things that every other management tool has. But this is usually for edge devices and, and workstation devices. And, you know, we, we, we saw some other, some very interesting ways that we could use a, an agent in on the machine. So we, we started writing one. But one of the things is how does, so on every network that, that we install onto, you have to have a, could be a virtual machine or just a server node or sync node that then talks to the cloud service that we offer. Or if you're doing it locally at the local service. 
Instead of making everything hand editing, the, the, the location of the servers IP address or the sync servers IP address, we just figured there's got to be a better way because in a corporate environment, in an enterprise environment, we wanted to be, we wanted to slip in the, the software and the, the ability to, to do what we're trying to do under, not under the radar, but might as well be, you know, the harder it is for someone to, to deploy, deploy a tool, you know, in an enterprise environment, the less likely it is to be accepted. And that was, that was one of the things that was driving the development is how do we make everything as easy as possible. So instead of doing a deployment and then running, you know, a, another tool that will then write its own configuration file that points the, the, the, the node management software to, to the appropriate IP address, we just thought, well, maybe we should look at zero comp. And we're everything we've written is in Python. And we did some, we came across and I think there are multiple libraries, this one called zero comp. And it's super simple to, to use. So what is like multicast DNS and DNS service discovery. So multicast uses a multicast packet or several packets to query host names on the same subnet. Usually that is provided by a DNS service. And thinking, bearing in mind the whole thing that I just said before about enterprise and trying not to change anything, or as little as possible. If you're in a, I mean, with one of the major customers is a, is a bank. And they had, whenever we're talking to them, we had to get three different represents representatives from three different departments. So it was the hardware people. It was the, the, the authentication and active directory people and the client, the client manager. And if we needed to, the networking people as well, who are generally in charge of DNS, although if you're an active directory, but you're making changes to active directory, making changes to DNS, if everyone, if it was not everyone wants to do it, in fact, everyone wants to do as little as possible, to be honest. So we had to find a way to provide, to get the host name of every machine that is running our software, or at least the other way around to find, to find the host that every machine needed to talk to. They don't need to talk to each other. They just need to talk to our service that's sitting on their network. So we ask the question and the network tells us. And then we ask the question, service to, you know, what the service, DNSSD does, which is, it basically says, are you running this service? And can I use that service? And what port are you running at? And what's your IP address? Now, DNSSD works with regular DNS and multicast DNS. In all these scenarios, you'll have a DNS server. But for example, if you've got 8.8.8.8 as your DNS server, there is no way that you're going to be able to convince Google to tell your local network where your Sniffy Sync server is, for example. So we put a text file or can be built into the code. At the server end, we can say, we're doing this kind of, we're offering this service, this IP address, come and find it. DNS works the same way with, if you know anything about DNS, there are such things as SRV and text records. And they're things you send, you apply to your DNS server and say, this is the location of an active directory server, for example, or a configuration information, which is stored in a TXT record. 
And that's one of the things that you come across if you're doing certbot configurations for Let's Encrypt. If you're building a wildcard certificate for a whole domain, like star.requiredmagic.com, you have to put some text into your DNS record so that it can look at it and go, oh, okay, that's the key I need to build the certificate. So then, as I said, multicast DNS asks the network who the host is, and then it has this conversation about what the host has got and how to connect to it. So we're getting to the code a lot faster than I expected. How simple is the code? The server side is really all you do in a Python program to say, this is what I'm offering. We can go through it. I'm not sure if you can see the pointer, and I can't see any comments because my screen is too small. But you import the zeroconf library and you create an instance of it, and then you describe what you're offering. We've got a settings file which basically holds the site UID and a label, which is the broker host. The service info is magic-sync, and what follows that bit is specific formatting that every single mDNS or zeroconf entry has: it says that this is TCP, so a transactional, connection-oriented exchange rather than one-off packets, and the other important part of the identifier is the dot local. Dot local is one of the key things here, because dot local is for your local area network, at home or anywhere else. People make the mistake of thinking that defining it in their DNS settings in an enterprise environment is helpful, and it actually isn't; you shouldn't do that. So we look up some of the details. The client software and the server software send information backwards and forwards using MQTT, which is a very lightweight message passing, message queuing system. That's how we send messages back and forth between client and server, and that's port number 1883, the default MQTT port; we could change it in the settings if we wanted to. And the description is magic-sync dot local. Then we have two functions that are defined at the beginning of the main program loop in the server. One is init zero, and that basically initialises the zeroconf service and registers it, and if there's a problem, of course, you log that. And at the end of the program loop, when you're closing everything down, you'd obviously be nice and unregister the service from the system. Now, the client side: there are two pages to this code, and a lot of it is actually just comments. It's basically super simple: get a list of devices, remove the service, start the service, count the number of servers that are offering the service, because there might be more than one, and then add the service details to what is returned once they have been asked for. This magic listener class is as complex as it gets to start the service. And then this is where we call that magic listener class, and we say the maximum number of servers is five, with a five second timeout.
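Before moving on to the client side, here is a rough sketch of the server-side registration just described, using the python-zeroconf library. The service type string, IP address, TXT properties and hostname below are assumptions modelled on the talk, not the speaker's actual code.

```python
import socket
from zeroconf import ServiceInfo, Zeroconf

# Service type and instance name are assumptions based on the talk's
# "_magic-sync._tcp.local." description.
SERVICE_TYPE = "_magic-sync._tcp.local."
SERVICE_NAME = "magic-sync._magic-sync._tcp.local."

info = ServiceInfo(
    SERVICE_TYPE,
    SERVICE_NAME,
    addresses=[socket.inet_aton("192.168.1.10")],  # the sync server's LAN address (placeholder)
    port=1883,                                     # default MQTT broker port
    properties={"site_uid": "example-site"},       # arbitrary key/value TXT data
    server="sync-server.local.",
)

zc = Zeroconf()
zc.register_service(info)        # announce the service via multicast DNS
try:
    input("Service registered; press Enter to stop...\n")
finally:
    zc.unregister_service(info)  # be polite and withdraw the announcement on shutdown
    zc.close()
```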
And it just basically says start the listener and then come back if there's nothing or come back if there is something that's returned. And that's as complex as it seems to get. And then this is a there's a mixture of the stuff here that you probably you may have seen. And so the client, the client makes a request and it comes back as a JSON packet just saying this is the location for the service that you need to contact. This is the port that you need to talk to. Wait in priority if they're different or different services that I haven't actually got to the bottom of those yet because this is working perfectly when we first did it. So there's no point to go any further. And the site the the X35BCU is actually the host name. And the broker host is the is the IP address. So it talks it finds that all that information just from starting up with no configuration whatsoever. And it sets things that you would normally have to put into a text file. And then our code then writes the configuration configuration file. So it doesn't necessarily have to refer to it again by doing the service the the zero conf stuff, but it does anyway. And we could change this means also that we can change the location of the server. We can change the port number. We could put in multiple servers or multiple on multiple IP addresses or multiple ports. It's all just you put this kind of these couple of pages into your Python code is and you'll end up and you've got this the similar service sitting at the on the server side, you will then be able to do everything zero configuration like this and you'll get the information back. It's it's I love it. And that's actually the end of the presentation, but it's not because in theory, I go over here. I can give you a very brief demonstration of this. So we've got the this is the server and this side is the client. And unfortunately, due to a at of all out of all things to happen, a networking error or a network problem that I've got home, I have to connect to a client site to tell them to do this to demonstrate this this particular version of sniffing doesn't have any any. There's no logging. So it's almost pointless not showing people the log doesn't come come up. I can go there. So you won't actually see anything change at the server side. There'll probably some errors that pop up but they're irrelevant. There's nothing really. But what you're seeing there are things that are being sent from different. There's an error. But this is a client there actually. This is the sync server subscribing to the MQTT server is running on the same machine. So it's not not very dramatic. Not very interesting. But here on the client side, it's kind of where it gets gets cool. So restart that and then immediately call the log and you'll see there it's checking for zero services and it's found one because there's one and it's there at that and writes the and it's already sending this information to the server. As I said, there's no I don't think there's any indication that's receiving that maybe. Published stats I think is one of the things that goes up upstream. But it sent all that information immediately upon subscription to the service and it does this on a regular basis, which is on our system is his hand and it's configurable so I can do it and you'll see that they're running an old version of leap. So we have to fix that. And yeah, so that's kind of it in terms of the way it all works. And so that's kind of it. So I'm just going back to the. 
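The client-side lookup described above might look roughly like the following sketch, again with python-zeroconf. The five second discovery window mirrors the talk; the listener class itself is an illustrative stand-in for the "magic listener" described, and the service type must match whatever the server registers.

```python
import socket
import time
from zeroconf import ServiceBrowser, Zeroconf

SERVICE_TYPE = "_magic-sync._tcp.local."  # must match what the server registers

class MagicListener:
    """Collect every matching service announced on the local network."""

    def __init__(self):
        self.found = []

    def add_service(self, zc, type_, name):
        info = zc.get_service_info(type_, name)
        if info:
            self.found.append({
                "host": info.server,
                "address": socket.inet_ntoa(info.addresses[0]),
                "port": info.port,
            })

    def remove_service(self, zc, type_, name):
        pass  # not needed for a one-shot lookup

    def update_service(self, zc, type_, name):
        pass  # newer zeroconf versions expect this method to exist

zc = Zeroconf()
listener = MagicListener()
browser = ServiceBrowser(zc, SERVICE_TYPE, listener)
time.sleep(5)          # crude five-second discovery window, as in the talk
zc.close()

print(listener.found)  # e.g. [{'host': 'sync-server.local.', 'address': '...', 'port': 1883}]
```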
So I think that's the presentation. I can answer some questions in the chat room, so I'll see you there.
Hand editing config files for local deployment? Say hi to Avahi. **Avahi: free configuration for your network service.** _(Live demo included)_ If you've wondered how your desktop Linux machine "discovers" items on your network, such as printers and file shares, this session will explain Avahi: the network service that advertises resources across a LAN. As the open source counterpart of Apple's Bonjour/Zeroconf, it is a very flexible way to enable discovery of services. We'll discuss how we used it in our deployment tooling, and we will demonstrate how to craft a configuration to discover custom resources for consumption by client software - enabling a true zero-touch service installation. Basic networking and Python knowledge is advantageous, but not essential.
10.5446/54686 (DOI)
Afternoon, everyone. Patrick Fitzgerald is my name. I've just finished one presentation. I've been thrown into another and I guess you're all waiting to get to the bar, even if bar is a socially distanced kitchen or something like that. I'm here to talk about Firebird, which is something I discovered many years ago. It's something that I discovered, just as you do, just wandering around your local hard disk and discover it's installed on this machine that should be running Linux in a very controlled environment. What's it doing there? Well, it turns out, it says it's probably already installed. Actually, I'm running tumbleweed and it's definitely not installed. I called Doug and said I've got a problem. It's been removed from the repo but it's still in leak. It might have already been installed. There might be a LibreOffice update. Removing it, I'm not sure, but it's a great database. But what can you do with it? What do you care? Yeah. Very briefly, we're going to go into what a database is. What's in a base, which is actually this predecessor, which is not predecessor so much as the original source code that was open sourced. Why is it so good? Why we use it? Or why we did use it? Why it's a good tool to have in the toolbox? So I'm Patrick Fisteros. I've been a programmer for spent eight years working in film and television. In the meantime, whilst I was doing that, because there's lots of spare time, if you're working in television and film, in Australia at least, which is where I'm from. I did a whole lot of programming and a friend of mine said I was creative in all directions. I'm not sure what that means. If it was an insult or a compliment, I'm still trying to work that out. But I've been into Linux for at least 20 years now and built an open source cloud. In 2010, I'm still running in a number of data centers in London and Zurich for large financial customers. I'll get short of how I've told you who they were. I've been in London for a long time. I've been now a Brexit refugee and currently residing in Germany and loving it. I'm a survivor of many things, including a cardiac arrest two years ago at this very conference. Enough about me. What do you think about me? I'm joking. Let's just go back to that slide. So what's in to base? Now, I came across it when I was running a company called Oceanwear Digital, which is a client service company, so they're working on people's systems. We had to write a system to track engineer work time. At that point, I knew a lot about Python, sorry, Pascal, and object Pascal. So Delphi was the tool of choice and it came with something called Indibase. I looked up and read up about it and it seemed to be, well, it's got a long history. It's solid. It's dependable. It's got all the things you want in an enterprise database. It's got a three-meg binary and the download is eight meg. It's almost too good to be true. It's also true multi-architecture. It runs on everything, including Android. And of course, the big thing was it was free with Delphi Enterprise. So that's what we went for. It's got a surprising history. It was one of the early embedded databases and theory years or the room readers that was originally developed for targeting systems of the M1 tank. Whether or not that's true, but one of the things that they had to assure the Department of Defense of is that regardless of what happened, if the system went down, it would come straight back up with no corruption. 
And based on that initial requirement is kind of where a whole lot of the different design decisions were made and so forth and therefore it's highly reliable as compact as well. The database sits in a single file and is accessed by a connection string, which you'll see shortly. The higher reliability is interesting because it meant that, I mean, it's not that we certainly weren't building systems for the M1 tank or any tank for that matter, but we were doing early experiments with iSCSI and other network things and NFS and sharing stuff and doing all sorts of interesting things, especially when we started building the cloud environment. And we found that if there was some sort of interruption to the network flow, for example, for the virtual machine that's been shared across an NFS share, things like MySQL, MySQL was pretty bad. It could not, it would have to do a massive recovery on it. On the same infrastructure, the same system running on a Windows system with Firebase, so with Firebird as the database system we never and still never have, with hunch wood, had a corruption of the database and it was being used very, very actively for our system. So what's Firebird? Well, this is what they call the year 2000 incident, which was they were going to take, they were planning to take it public and they were going to open source the code and it was all, Linux was just coming on stream and it was becoming more and more popular. Lots of internet companies were being funded and then suddenly the bubble burst and a whole lot of companies lost a lot of money. And of course year 2000 was happening right then, which was something that a lot of money had been poured into companies to make sure that the year 2000, the year 2000, their systems would not, for example, end up resetting back to zero because they've got a two digit date filled for when it comes to the year. So, and that was a massive problem and what's funny is when people talk about it, there's lots of people who don't know that that was a big issue and it was. So they were, so the, the company, in prizes company, which is owned by Borland, I think, and they're going to take it public and they briefly open sourced it before all this stuff in year 2000 happened, which meant that they just realized it wasn't a good time to open, sort of, to go public. So they pulled the floating of the business and they realized the open source code was out there already on the internet. So they closed it, they closed it off, but someone had already grabbed it with the look, with the open source license and they started building firebird from the original source. And a lot of the original developers, I think, from, from interbase, interbase is the corporation that built it, they joined and they're still part of the, the team that's making it. So, so what is it good at? Anything that doesn't need, this memory restricted, anything, any system that you need to have a maintenance free database or data store. So that could be data system, data collection systems running in Raspberry Pi's or smallest, smaller devices that are unattended and sitting in someone's booth or something like that, just getting, getting information and then pumping up to the internet. Multi-user systems that just post-gres or my, my SQL or MSQL is complete overkill. Multi-user systems is, is not really where SQL like shines. 
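Since the connection string never quite makes it onto the screen in the talk, here is a hedged sketch of connecting to Firebird's bundled employee sample database from Python with the fdb driver. The database path is a placeholder, and masterkey is only Firebird's well-known historical default password.

```python
import fdb  # Firebird Python driver (pip install fdb)

# A Firebird database is a single file, addressed here as host:path in the DSN.
# Path and credentials are placeholders; change them for a real setup.
con = fdb.connect(
    dsn="localhost:/var/lib/firebird/data/employee.fdb",
    user="SYSDBA",
    password="masterkey",   # Firebird's historical default password
)

cur = con.cursor()
# The bundled employee sample database ships with an EMPLOYEE table.
cur.execute("SELECT FIRST 5 first_name, last_name FROM employee")
for first, last in cur.fetchall():
    print(first, last)

con.close()
```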
A lot of people, I'm sure that I'm making a lot of people very irritated by saying that, but it's not really, it, when you write, when it writes to the database file, it blocks, does a block. So you can't actually have multiple records been written at the same time by different servers, by different service or processes. And because of the way it writes the file, if you're in a situation where you've got unpredictable power or unpredictable, you know, system or something like that, it's, like I said, we've had, we've worked, we've used this system for almost 20 years, with various infrastructural environments with various problems and we've never had a corruption. Very good for something like a kiosk, which is probably better, better suited by SQL, for SQL life. But it also means if you're doing something in, in, in, in fiber, you can scale. You can scale it, you can have multiple connections, multiple client systems talking to the same database and therefore establish the social, so-called single source of the truth. So that's, so when, so that's, that's, you know, that's a big thing for, for us, not having multiple, multiple databases saying multiple, multiple different things about customers or, or their systems or, we put it all in one database and that, and everything links into that. And of course it embedded an image devices size does matter. And unfortunately, I can't scale this slide properly, but it's not because of my lack of ability, but it's because the different size between, between SQL light, Firebird and MySQL or Maria DB is so dramatic that it's, you know, it's unbelievable. I was actually completely surprised when I was doing the preparation for this slide because I didn't realize A, Firebird was still that small and that B, MySQL was that large. And I will do a bit of a brief demonstration on Django and even the the RPM for Django is about 10 meg. So if you're doing something and building something that will talk to, to, you know, lightweight systems, or when you're client, the client access system, if it's something like Django, you're talking about 10 meg download and the database is going to be eight, well, a lot, a lot more than that. What's that? 80 times larger, which is kind of silly. And the thing is about code. The more lines of code there are, there's a greater opportunity for failure or create greater opportunity for errors. And one of the best things about Firebird is it's so, I mean, it's version four has just been released. And they keep on gradually introducing different features. But the code is rock solid. And you're not going to have some sort of failure because of someone has just introduced something to the code. So our use case was, and still is, our ILA as job tracking system. It was originally started with the previous company, which was called Oceanwear Digital. As I said, designed around 1998, and Delphine Interface were used. We did two clients and one of them was a web based system for the client, for the engineers to use when they're on client sites. Because of course, back then, being showing my age, but back then, it was all the rage for companies to start installing the internet and getting access and giving people the ability to browse the very few websites that were available. But even then it was booming. So, and the basis of that was we would have engineers would be online anyway. 
So we decided that instead of making it a thick client, a fat client, that would be installed on machines at each client site, we'd just make it a web page. And that's what we did. There were a couple of other clients developed as well, but one of the most important ones was a desktop Windows client, written in Delphi as well, that would take all the invoices and all the time that's tracked — we were tracking time much like a legal company does: you start talking to your lawyer, he clicks a button, starts a stopwatch, takes the time and charges it in six minute increments. We thought that was a good idea. Customers didn't necessarily agree. So the data gets injected into the system via the web page, and then we do all sorts of operations on that data, depending on which client had which contract and what the discount level was. It all gets sorted out at the end of every work job — the work request, as we called it — and you come to the end of the month and come up with an accurate invoice. But we kept adding more and more stuff to it, and it became unmanageable. That's just the database, the data segments of the code — there are no modules or models there that refer to actual procedural work; they're just the database queries calling different functions inside the database. It became impossible to manage; it just got too complex. And that's just a small segment of the schema of the database. So we had to come up with something that was more manageable than that single, custom CGI program — also because the CGI program was getting a bit dated, and we were moving to Linux: we wanted all the flexibility of Linux on the web server and to get away from IIS. So the questions we really had to ask were: do we have the time and budget to build a whole new database? And the answer to that is no — as well as the migration of the data and all the other things that go with it, because a database is not just about data; I might still be coming up to that slide, actually. You know, what do we have? Can we migrate all that stuff? Well, of course we can — but really, it's time. Can we do it without interrupting the monthly accounting and billing process? Probably not. And do you want to take that risk as a small business? I certainly didn't, because I had become CEO of iLer. So we figured there might be a way: instead of migrating everything to a different database, maybe we can keep the existing database and do a different front end. So I came across — well, we didn't come up with it, a lot of other talented people came up with it — Django, which is a Python based web framework that I'm sure a lot of people have heard of. A lot of people may not have heard of something called inspectdb, and I'll show you shortly what happens when you run that. So this is the new version, which we renamed WaveSuite in honor of the previous company, OceanWare Digital. We built it on Django and we retained the Firebird database back end. We didn't really make very many changes.
Eventually we built in Active Directory authentication, so we didn't have to deal with multiple usernames in multiple systems, and that was a real bonus. And then, because of the flexibility of all the Python applications and libraries that are available, we were able to start linking into different things like the PBX and Jabber and building presence in. So in the web page you see here — you actually can't see the presence stuff, but down the left hand side you'd see team status and all sorts of things. We're using IBM Director, which is a monitoring tool. So it really opened up our ability to do this. The other thing, once again, is that it retained a single source of truth, which is very important in data management. So, how do you use it? Well, as I said earlier, the problem with preparing this presentation was that halfway through I realized my local setup doesn't work anymore, because I'd switched from Leap to Tumbleweed, and I'm not sure what's going on — I think Firebird might need some help packaging their binaries for Tumbleweed. So if anyone wants to stick their hand up and do that, it would be fantastic. What we can do is show you how we take the default database — the employee demo database — and turn it into a webified system using Django, which of course then proves how this works. And hopefully, fingers crossed, it's a live demo, so we will see something shortly. Right, let's see — I just have to get all the other systems out of the way. Here we go. So there are all sorts of tools you can use to interrogate a Firebird database, and one of them is the WaveSuite system itself. Here we're using FlameRobin, and one of the nice things is that it is open source. There are also some problems connecting, so let's try something else. There are commercial packages, and there are loads of third party client and admin tools. Let's see if this works. No, okay, I can't show you that, but I can show you the original sample database. I'm not sure if we can make that larger — make it larger and make it useful. No. So — there was a missing slide here about what a database is, and what a database has that a lot of people don't see and don't realize, because there's a lot of complexity that's hidden by the fantastic front ends that you get. You can do things like define exceptions, so you can say: if something happens, then run this code, whatever it is. You can have functions in there; you can have generators, which are basically things that generate a unique ID, for example; and procedures. All of these can get called and can actually return their results as part of the query that you make, so they get mixed into the tables. There's a bunch of system tables that keep track of things, and these are the ten tables that this sample database has. They're all linked together via various joins and what they call primary keys, and they're linked via these key relationships. Okay, let's see if we can look at that. I probably can't make it any larger, but you get the idea.
It's a lot of complexity that's built into any kind of database. And I guess the point of this is that you can get all the information out in a consistent and reliable way. That's one of the great things about using a database: you can rely on the fact that once the data gets committed, you can get it back in the same way, link it to different information, and see the information in a completely different light. So, with all of that there — I've already installed Django and I've already installed the database driver. Let me just run that program. Let's see. There's a whole lot of errors there; let's try another one. Okay. So there is something called inspectdb. Now, I'll do something slightly out of order here. This is the Django settings file. In the Django settings file you give it the database name and the path, which is the connection string, the host name, which is also part of the connection string — you'll see that down there — the username, and where the database lives. Now, you put that into Django and fire it up, and it blows up if you try to do anything with it, because it doesn't know the database. All you've done is given it a username and a password and an address, and Django goes away and says: I don't understand any of this — and besides, you're not fulfilling my requirement of having my set of user tables and the stuff that I need in that database. To do that, there's a simple command called migrate. Once you've migrated, it creates all these different tables — sessions and so on — and then you end up with this version as opposed to this version. So it's made some changes; the most obvious one is in the tables. Here we had ten tables, but after you run the migrate, those ten tables turn into twenty. And those extra tables cover authentication, permissions, the different groups, the Django admin log, and something called migrations — where you can add changes to tables and migrate them, and then they become part of the whole database, et cetera. So it adds another ten tables. Having done that — I've done the migrations and created those ten extra tables — Django still doesn't know about anything else. The way to give it the information about what's there, and also so you can use it within your system and make modifications to it, is this: I run inspectdb against the database, and it comes back with a whole lot of what looks like garbage, but it's actually not garbage. It's the Django and Python models for every single table it has discovered. You can put that into your project as a models.py, and you can modify it. You can do things like have different labels show up — instead of it being proj_dept_budget, for example, you can turn it into something like "project department budget", so it displays properly on the screen. I don't even think you need to do that, actually. And having done that, you just run the runserver command.
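What follows is a minimal sketch of the settings and commands being described, assuming the third-party django-firebird backend (whose engine name is typically just `firebird`); the file path, credentials and app name are illustrative, not taken from the talk.

```python
# Sketch of the Django settings the talk walks through, assuming the
# django-firebird backend; engine name, path and credentials are illustrative
# and will differ on your system.
DATABASES = {
    "default": {
        "ENGINE": "firebird",                           # backend provided by django-firebird
        "NAME": "/var/lib/firebird/data/employee.fdb",  # database file (part of the connection string)
        "HOST": "localhost",                            # where the database lives
        "PORT": "3050",
        "USER": "SYSDBA",
        "PASSWORD": "masterkey",
    }
}

# Typical workflow described in the talk (run from the project directory):
#   python manage.py migrate                      # adds Django's own ~10 tables (auth, sessions, ...)
#   python manage.py inspectdb > myapp/models.py  # generates a model class per existing table
#   python manage.py runserver                    # serve the admin / your views on top of the old data
```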
Hopefully this isn't going to blow up. We do get a little bit of a warning that something is not quite right — if you hadn't run the migrate before this, it would come up with an error saying there's a whole bunch of things you need to add in order to be able to connect. And now we've got a system that is running off the database we just looked at. So let's get that right. Great. Okay, this is where we're at. There's probably something in the back end that I didn't do — this was working about an hour and a half ago — but if you've ever used Django, you'll know roughly what it should look like. Essentially you'll get something that looks very much like... no, and unfortunately the demo has only gone so far. No, I don't have network connectivity to that site. But you get... yeah, that's what you get: you come up with an admin view, and you can change any data, anything. That's the way to get started with Django — using the admin tool — and then you start writing your code. In terms of doing something lightweight, Django is lightweight enough, even though it doesn't seem like it sometimes, and in certain circumstances it makes a perfect match with Firebird. It's compact, it's lightweight, it's solid as a rock. It just keeps on running where other things don't, and you don't need to look after it — there's nothing to look after. It just sits there and runs; you don't have to do anything to make sure that things that were working continue working. It sits there in the background and does what it says it's going to do. And that's how we ended up making WaveSuite. So, just in summary: the Firebird integration with LibreOffice — well, it was a good thing. I'm not sure what the future of it is, because they had some problems with the client side code, and if anyone's interested, they should take a look at helping out there. It may or may not already be on your machine if you've got LibreOffice — it was there in 15.2; I'm not sure about 15.3, it might still be there. But the most important thing really is: it's fast, it's fully featured, it has a very small footprint, it handles multiple users very well, and it's very light on memory as well. And I guess the key takeaway is: if someone is telling you they need a database to do something — "I want to do this" — take a look at the use case, take a look at what really is needed, take a look at the size of the IT staff, and see whether they have a DB admin person there to do the work, which you kind of do need if you're running anything larger than SQLite or Firebird. So keep an open mind: the usual choice could be the wrong choice for the use case. I'll upload all this code to that link, and there are a couple of pages there to find out more information. And I think I'm kind of done. Thank you.
Installed with LibreOffice, Firebird is a vastly capable RDBMS. Following on from the brief introduction at the LibreOffice Summit, I will re-introduce Firebird, the open source version of InterBase, the original embedded systems database. Firebird is a high performance, small footprint database with a long (and interesting) history. Firebird features stored procedures, transactions, encryption and multi-user access, is SQL-92 compliant, and can handle databases as large as _20 terabytes._ Because of its small size, efficiency and multiplatform nature, Firebird is ideally suited to IoT and edge device deployments, at any tier of system architecture. I will go through the steps required to prepare a new database for production and multi-user operations, using a simple Django deployment as an example.
10.5446/54687 (DOI)
OK, starting now. Hello, guys. Welcome to my talk. It's about getting changes into openSUSE Leap 15.3 and newer, and into SLE, which we now can. Let me tell you something about myself. I am Lubos Kocman, and I work as the release manager for openSUSE Leap — let me maybe share my screen. I've basically been on Linux since people started talking about it, with Mandrake Linux 9.1; it was attached to one of the magazines that I was reading. Otherwise, I used to be a release engineer for EL 6 and 7 in the past, I was shortly involved in the beginnings of Fedora Modularity, and I was involved in the Czech openSUSE user group back then. So let's check the agenda for today. You may have noticed, but Leap 15.3 is relatively complex. There are a lot of projects that kind of make up Leap, and the goal here is to guide you through these projects, tell you what they are used for and how to contribute to them, so we are on the same page. By the way, most of the information that I'll be talking about can be found on the openSUSE "Packaging for Leap" wiki page. So if something is incorrect there, or if you'd like to expand it, feel free to contribute to the page — any help is appreciated. So let's talk about how we build the distribution nowadays. The line on the top is Tumbleweed, or Factory if you want. For us the important part is the bottom, since we are talking about Leap 15.3. A large part of the packages is binary-inherited from SUSE SLE 15, from all service packs — so just saying SLE 15 SP3 is kind of incorrect here, because it's SP0, SP0 update, SP1, SP1 update, SP2, SP2 update, SP3. That's the SLE code stream, plus Package Hub — those are roughly the two parts. And the last little dot on the bottom, if you can see my cursor, is the Leap branding, kiwi files, anything related to product builds, or packages that we explicitly fork from SLE because we want to keep maybe different behavior — there are a few, and they are also not in Package Hub, but let's talk about that a little bit later. The general idea is that SLE and Package Hub make the new Leap. And since we were already building Package Hub from Leap packages, we always copy the packages over, and we think that just continuing this way actually makes sense — that's why we are reusing it that way. We are looking for ways to simplify and improve it, make it a little bit faster, because the way we build right now is not ideal, and we want to make it a little bit better in 15.4. So, the actual structure. I've mentioned the three projects, and we can see them at the top: openSUSE:Leap:15.3, roughly 166 packages. What's different from 15.2 is that we are now building all of the architectures, with the exception of armv7, in there — so we have s390x there. Then there is the non-free project, which has the non-free parts — that would be Discord, Steam and others; it's roughly 30 packages, or exactly 30 packages, sorry. Then Backports — that's almost 10,000 packages. This is basically the place where most of the Leap development happens. So if you submit a submission to Leap, it will be redirected — I'll talk about the redirection later — and it will most likely end up here, as long as it's a Leap-exclusive package. If it's a SLE package, it will end up in SLE. Then we get to SLE. I'm actually writing SUSE:SLE-15 SP* here, because the redirection covers SLE-15 and SLE-15:Update — we don't use SP0 in the project name — then SP1, SP1:Update, and so on.
And that's roughly 5,000 packages, if you deduplicate the individual updates to the individual streams. And then, what's interesting — and maybe you are not aware of this — is that we now have openSUSE Step, which is used for a full rebuild of SUSE Linux Enterprise. Because we inherit binaries from SLE, and SLE doesn't have armv7, we had to find a way to enable armv7 builds. The way we approached it is by rebuilding each layer of SLE and adding armv7 support there. So in the end it's a duplicate of about 5,000 packages — but when I was checking the actual number, including all updates, it was more like 17,000 in the individual service packs. So it's quite large. And then the last project is under the openSUSE Step namespace, and it's the FrontRunner, where we rebuild everything in a single layer, not in individual service packs. This is where we are trying to develop the armv7 port, and we also have the option for what we call in-development versions, or suggested fixes for SLE for the next service packs. So there is a little bit of flexibility, where we can take some changes ahead — because the bootstrap and rebuild takes quite a lot of time, and this simplification really makes it shorter. And then, if a change is considered possibly dangerous at this point, or it needs more testing, or it's not yet ready, or the currently developed service pack is maybe in RC and we cannot take the change there, we can use this to enable the builds. And maybe we will use it for RISC-V enablement in the future, because we are at the point where this is already working. So this is roughly the structure — I'm skipping images and containers, I know. But let's talk about package submissions, because this is probably where you would contribute; if you want to contribute to armv7 development, it would be one of the two projects at the bottom, most likely the FrontRunner. So, details about the projects. openSUSE:Leap:15.3 — I mentioned that there are only a few packages, 160 or so. It's the top-level project, which contains most of the product configuration and branding and some product packages from SLE. We are using it for building DVDs. So it's sort of an umbrella project which takes all the artifacts from SLE and from Backports and then makes the deliverables. And there is one interesting thing about the project: it has submit request mirroring. So if you are contributing to an existing package, not a new one, and you submit something against openSUSE Leap — just like in the example, osc sr openSUSE:Leap:15.3 and your package — it will actually detect where the package is coming from, the origin of the package, and the submission will be redirected there. So you don't really have to think about where it is. But there are certain cases — if we are rebuilding the package, if we have it forked — where you really want to send the submit request to the true origin, which could be SLE in case we forked it. So now let's look at the largest one, which is Backports, basically the new Leap. The nice part compared to 15.2 is that we now build the Package Hub packages — that's the module on the SLE side, the publishable part for SLE — and Leap in one project, not twice like we used to before. And we were considering whether we should actually have these two layers or not.
And even now it's a question for 15.4, because the additional layer — being able to maybe put some extra branding on top of Backports and therefore have two different build configurations for Package Hub and Leap — gives us a little bit of flexibility. There are some downsides as well: a little bit of increased complexity, and, as I recently heard, the scheduler priority is also different if you finish a build in a foreign project versus the same project, so there's a little bit of delay too. And how do you contribute to this new Leap? Basically, as I told you, if it's an existing package, you just contribute to openSUSE:Leap:15.3; and if it's a new one, you have to be really explicit — you can submit to Backports directly, to be sure, in any case. But I really like the redirection, so I always advise using it, because if it's a SLE package it will do the SLE submission for you. For new packages, please use the actual destination: if, for whatever reason, you need to add a new package to SLE — for example because we are updating some software there which then has a new dependency — submit specifically to SLE; I will show you on the next slide how to do that. If you are really adding something new to Leap, then you want Backports. If you are adding some branding, then maybe the top-level Leap project. Right now the behavior is that if you submitted to 15.4 GA — which doesn't exist yet in OBS — it would land in the top-level project, which I feel is incorrect. So we may actually set the default to Backports, because that will be the case for most packages. So let's have a look at SLE-15:*. As I mentioned, it's basically a layered structure — you don't see all the service packs here, this is the predefined graphic from the slides and it only fits these four. Everything that was once released was SLE 15 GA, and then if there was an update to it, the package got updated through SLE-15:Update, so then it was SLE 15 GA plus updates. Then in SLE 15 SP1 maybe they rebased GNOME or something, so the package got forked in that project. So you can see how it's glued together through the inheritance. And there are some nice things about it, because we actually support code streams in parallel — the SP1 LTSS, I believe, ended recently — and we try to minimize the number of code streams that our maintenance team supports, which is a nice idea. But it makes it a little bit tricky when you are trying to contribute to SLE, because you need to understand where we want to place it, or you have to ask somebody to help you. Where should we put the package, usually? I think this is on the next slide, actually. Usually, if it's a rebase, the answer is the currently developed service pack, SP4 nowadays. If it's a library that we try to maintain across all code streams, then the answer is the latest supported service pack. So, how to submit there? Again, if the package is not forked in Leap — which will be the case for most SLE packages, as we try to use the binaries from SLE in as many cases as we can — you would just do a submit request against Leap 15.4 and the name of the package, for example bash. Bash is coming from SLE, so this will actually directly submit the request to SUSE:SLE-15-SP3:Update nowadays, because the latest bash was released in SP3 GA. And that then triggers a review for the SUSE SLE reviewers.
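As a rough sketch of what those submissions look like on the command line — assuming the usual osc branch-and-checkout workflow, and with package names, versions and the bug reference being purely illustrative — it could look like this:

```sh
# Purely illustrative sketch of the submissions described above; package
# names, versions and the bug reference are made up for the example.

# Existing package: submit against Leap itself (run from inside your
# checked-out branch of the package). The review tooling resolves the true
# origin and reroutes the request there -- Backports for community packages,
# the SLE update project for SLE-maintained packages.
osc submitrequest -m "Update to 5.1.8 (boo#1234567)" openSUSE:Leap:15.3 bash

# New, Leap-only package: be explicit and target Backports directly.
osc submitrequest -m "New package: example-tool" openSUSE:Backports:SLE-15-SP3 example-tool
```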
And somebody has to manually run osc jump review, which is a plugin for osc that the release team has. It shows the currently open requests, and if it looks good — if it has the issue references, and I'll go into the details of what you need to have in a submit request to SLE — then it triggers the mirroring. The submission will be cloned on the internal instance, and then we have some syncing of statuses and comments in place, which basically makes sure that if the package passes the workflow, you will know about it on the OBS side. There may be some bugs at the moment, but this is the idea. And then it's basically like any other submission created by a SUSE engineer; it will be created under the name of the person who ran the review, so in most cases it will be me. So, about the jump review and how it looks: this is the example output, for bash. It's basically telling me about the change, and then I'm just looking at whether it has the issue reference, which we really have to have for every single SLE maintenance request — you need to have a bug reference, and if you are doing a version update, you will have to have some feature referenced. If it has all of this, then we can accept it, and it just creates a new SR with a new ID, but in a different OBS instance — the internal one, which we refer to as IBS. Right now this is fully manual, just to make that clear. So if I don't run it for two days, you have to wait for it. Maybe we should figure out some nice way so this doesn't depend on whether Lubos is around or not, but we can talk about that in the release team meetings. So, a closer look at SLE. What's important when you are doing a submission to SLE? The deadlines. We try to have the same deadlines in Leap, so our alpha should really be aligned with their alpha, especially the deadlines — it should be exactly the same, so it feels natural to contribute. SLE may be a little bit more strict about when new features should come in than Leap. Generally, after beta you shouldn't be deleting packages anymore, so deletions should happen before; but if it's communicated properly, we take changes up until RC, and after that it should really be just bug fixes, to stabilize as much as we can. On the SLE side it's a little bit more strict, and we have to respect that — it was always the case, it's not something new. And if it's a maintenance request or a late feature request — something you would open around beta time or even later, which happens all the time — there is something called an ECO. It may sound scary; we have it on the wiki, so you can look up what it exactly means, but it's basically an engineering change order, which requires some additional approvals. To you it remains hidden, because even if you have access to Jira — and I will show how that looks later — you basically see just a feature, and behind it there is one extra Jira issue blocking yours, with subtasks for each reviewer. Right now I believe it's four reviewers — something like the security team, level 3 support, and so on — who say whether the change looks OK. There are weekly review meetings where they go over all open ECOs, and once the approval is done, we can proceed: it gets handed over to the maintenance team, and so on. Again, as I mentioned, all requests to SLE have to reference some issue — either a Jira issue, that would be jsc# and then the Jira ID.
If you requested jira.suse.com access as an openSUSE contributor, you can look there; if not, we go through these things on Mondays. There is now also a wiki page for planning the features for SP4, where we reference these features, and that is something you should reference in the SR. Or the bug: for most bug fixes going into maintenance updates, you just need to reference the Bugzilla entry and that should be pretty much fine. And again, if we reject a submit request, we are doing it with good intentions, because we want to shorten the feedback loop — otherwise we would mirror it, it would get rejected internally, and it would just take a few more days. Right now we really need to do that filtering so we don't waste anyone's time. So how do these Jira issues look, especially for Leap 15.3? If it's not a bug fix or CVE fix but a version update, we need to have some sort of feature. We have an openSUSE project within Jira where we track all of them. We know this is not ideal, because requesting an account takes some time — it's not that you can just log in and go there; a change has to be made on SUSE's LDAP, it needs to be approved, and so on, it takes a few days. We are actually considering some public interface. Right now, Neil Gompa has created the leap/features project on the Pagure instance, and we are considering whether it could be the front end where we would have all the features — we would have to figure out some sort of sync, probably manual in the beginning. But it would probably be more convenient: you could use your openSUSE login, just go there, create the feature, and then we would review them, probably in our Monday meetings, which I'm referencing here. We have it on the wiki as well — check the community SLE feature and change request wiki page. What we are doing is collecting the features that are necessary for these version updates in SLE, always monitoring whether there is progress or what the blocker is, talking about them, talking about what's new, and making sure we are tracking everything that needs to be done to unblock progress on Leap, the community part. That's the idea. Most feature requests are: hey, I need to update my package, and for that we need to update this version of a library in SLE — that would basically be the case. So if you ever feel blocked on something, don't say: well, I will wait until SLE updates it. We can really request a change; if there is a good use case for it — I needed to update my software so we can fix this bug, for example — then this is exactly what it is used for. The attendees of these meetings are usually me and Neil, sometimes Gertjan joins, and I know Sara was there a few times as well — so feel free to join. I'm also considering maybe running it in the bar in the future rather than in the feature-requests room, because we have more attendance there and it feels more natural to talk to those people — just switch over for half an hour, go through it, people can tell you their opinions, or maybe we can create some more requests if necessary. I really want to be open about these. In the end it has to be tracked in Jira, but maybe we can use a different front end, so you don't have to worry about the accounts. So, we were talking about these origins: SLE, Backports — and we've also mentioned FrontRunner, or Step. You know it's SLE — but how do I check that? I have some examples here which are a little bit tricky, like LLVM.
We don't have the origin manager in 15.3, so you can use osc meta or the web browser — I will show the browser way two slides later. You would do osc meta pkg openSUSE:Leap:15.3 llvm. You know there is the submit request redirection, so if you want to have it easy, just submit to Leap; but if you really want to double-check whether the package is not actually a SLE package, I recommend using osc meta. There you would query llvm, and you can grep for the project — or just look at the first line or two — and you see that this package is coming from openSUSE:Backports:SLE-15-SP3. So it looks like a community package, a Leap package. But in the end, this is one of the packages that we had to fork because of different expectations on the SLE side and the Leap side: SLE has LLVM 7, an old LLVM, and this is the meta package, just saying which one is the default. Otherwise the same versions of LLVM are available across both distributions, but we have different defaults — let's put it this way. And now the question is: OK, so I see it's in SLE as well, and it's in Backports, so where do I send the submit request? That's the tricky part, I guess, of the new model. It really depends on a few aspects. You can double-check quickly and use common sense: are the sources identical? If they are identical, then you know we probably forked it because we need to rebuild it — maybe we are utilizing the is_opensuse macro and have some different behavior on Leap. If they are different, then you know we have an agreement to have completely different source code, like in the LLVM case. If it's completely different, then most likely the answer is that you want to contribute on the Leap side, because we have different sources and work on those separately. If it's identical, then SLE is the answer, because we have to stay in sync — in most cases we did the fork, for example, because we want some extra flavors in Leap and in Package Hub which are not supported on the SLE side. So usually SLE is the answer, and we then copy the source code as soon as it's done and rebuild the package, so we can ship the additional flavors. And then, if it's in SLE and you see that the package is coming from, for example, SLE 15 SP1 GA, which is EOL already, then that's tricky — I really recommend syncing up with us. But on the other hand, if you file a submit request against even SLE 15 SP1 GA, we will catch it during the review and say: oh, we probably need to send this to something newer. So in general the answer is the latest supported service pack, or, if you are doing a rebase, the currently developed service pack, like SP4 in this case. So it's a little bit tricky, and I know it will require guidance, but we are here to help you. And again, as mentioned, we are reviewing every single submission, so if something looks suspicious we will just let you know, and we'll redo it differently or help you achieve what you need to achieve. Another case: branding packages — I've mentioned they are in Leap, so this is a very simple example. Again, you would do an osc sr to Leap 15.3, just like in the other cases. The only exception is really when we forked the package — then you have to think about where you really want to update the code.
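A small sketch of that origin check, with the expected output paraphrased from the talk rather than captured verbatim:

```sh
# Quick origin check for a Leap package; "llvm" is just the example from the talk.
osc meta pkg openSUSE:Leap:15.3 llvm | grep -i project
# A result containing  project="openSUSE:Backports:SLE-15-SP3"  means it is a
# community/Backports package, while a SLE-maintained package would instead show
# a SLE project such as an SP's GA or Update project.
```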
So here it's just a request to Leap 15.3. Maintenance requests — maintenance requests are currently tricky. For previous code streams you can just do an SR against, let's say, Leap 15.2:Update and it will work, but for SLE-15-SP3:Update it currently doesn't work. We have a bug, and I'm discussing how to get it fixed, because if you do an MR it triggers an openSUSE maintenance request and it doesn't get to SLE, so we have to improve this. But stay tuned — it should be fixed within the next few days. And the maintenance workflow, including the SLE part, will be documented on the maintenance update process wiki. Another way to check the origin — I mentioned you can also do it through the web browser. If you go to the Leap 15.3 project, you can check the inherited packages. Packages that we are introducing in 15.3, or the branding packages, would be under the project's own packages, but you want to look at the inherited ones, because, as I told you, most of Leap is coming from Backports. You just look for the package and you see: oh, it's coming from SLE, or it's coming from Backports, and from which service pack of SLE — this one is the very first one, the zeroth service pack; we just don't use SP0 in the name. So this is actually a convenient way. One thing I would like to say before the discussion — because the next step is just discussion, listening to your pain points and looking for suggestions on how to make this a little bit more contributor friendly — is that I would like to ask you to join the bar, any time you have a moment, or whenever you want to discuss some of the issues or hindrances or whatever you run into while contributing. There is always someone there — this screenshot is from half past three in the morning, you can see the time, and you can see that people are there and everybody's happy; that's just because we are from all around the globe. And now let's move to the discussion, because otherwise this is the end of the presentation — let me switch over to the conference room then. Okay. Hey guys, can you hear me? We can hear you. Perfect, perfect — any questions? I know it's overwhelming, because there is a lot of information, so sorry for that, but unfortunately this is how the new layout looks. If somebody has some questions, or something is still unclear, just ask — the discussion is more important than the actual slides, because we want to fix your problems and not just talk about how it looks. No one? Well. Yeah, Neil can talk about it a little bit, right? So let me give some context: Neil is one of the people who actually attend the meeting regularly — he's there every time — and he has experience with late features, not yet with early features, because I think we started relatively late; 15.3 also had a really short development phase, because we could only start late, from SLE SP2, because of the NDA. These days we will have much more time to actually work on some early features as well. Your turn, Neil. Okay.
What needs to happen to get your change into SUSE Linux Enterprise 15 and related openSUSE Leap.
10.5446/54689 (DOI)
We're getting a bit of a delay, so if people have questions, please ask them in the chat. Hey, we're on! Yay! Hi everyone, I can see you now. All right — I'll just hand it over, and I'll read out the questions if you want. Wow. All right. Yeah. Oh, but we've got some delay. Yes, there's a delay. Well, the speed of light is not fast enough. No, it's copper wire. So, Gerald, did you want to start off with a presentation, or did someone else want to take that? Well, I wasn't sure whether you would say one or two wise words or we'd just dive in. Okay — that's not working for me. You're done, and your job is to send out t-shirts. Discrimination. Well, yeah — welcome everyone to the board session, on behalf of the board and openSUSE. This unfortunately is our second event in a row — as in the annual openSUSE conference — where we all have to sit in front of screens, smaller or larger. I think Simon has a bigger one; mine is just 14 inch right now, but all the nicer to connect. Before we dive into the board part, I wanted to say a big thank you. A big thank you to everyone who helped put this together — to organize, to speak, present, engage, listen — and who supported it in any way: packaging some of the software, running some of the infrastructure that powers the conference, contributing to openSUSE over the years, being a user, even filing bug reports. Everything. It's really nice to have this community. This is a no-slides presentation — there's so much screen time, there are so many slides, that we just want to focus on the talking. Before we dive in, let's welcome some of the new board members, in alphabetical order. Gertjan, or Knurpht, or however you pronounce that, was elected in the last elections to join the board again — want to briefly introduce yourself? Okay, I'm Gertjan — you pronounced that pretty well, Gerald. Gertjan, aka Knurpht. I'm pretty much everywhere. I'm on the board, I'm still a forums admin, though I've become less active because life happened and work happened. I'm pretty active on the channels that we have on Discord, Telegram, Matrix, stuff like that. And what I really like is that we have managed to engage some younger people; they engage their friends, and we've seen it happen in the bar sessions that we have. I'm also a co-founder of the bar, by the way. That's what I do, and I live in Groningen, Netherlands. You're muted, Gerald. You're muted. I am muted. Yeah — the other one who joined in the last elections is Neil, for whom every single board meeting is a strong testament of his dedication to openSUSE, because for him it's very early in the morning. Right, yes. So my name is Neil Gompa. I've been a member of the board since January, and my dedication shows in that I wind up joining board meetings before I even get to do anything else, including eating breakfast, because it is seven o'clock in the morning my time when that happens, and I'm usually waking up literally 30 minutes beforehand — so I don't even have time to do much of anything other than clean up for the meeting. But yeah, within openSUSE I'm involved in the Heroes team, running a lot of the infrastructure and helping to improve the quality of our services — for example the Pagure instance we have at code.opensuse.org, which I help run. I do a fair bit around packaging and improving the infrastructure and tools in the distribution.
And I do a lot of — I want to say random, but not exactly random — a wide variety of things across the entire spectrum of openSUSE, from the software stack at the bottom for software management, to graphical desktop tooling at the top, to try to make the openSUSE experience the best that it can be, and to help bring openSUSE into the wider mindset of the open source world — to make us more prominent, more visible, and more active and engaged, rather than sitting in a corner where nobody really hears about us. That's kind of what I try to aim and focus on. Cool. And then there's also Sits, who joined us. Sits didn't come in through the election, but joined the board as treasurer — so he's a co-opted board member. Yeah, a non-voting board member, as some people like to point out sometimes. I love you anyway. Yeah, I'm glad to hear it. But yeah, I'm 35. I've been a financial controller or financial accountant for about 15 years now — first as an auditor, auditing financial statements, then I worked for Tele2 as a financial controller, and then for Novamedia, which runs the Postcode Lottery in the Netherlands. Now I'm a freelance financial controller — people can just hire me. One of the things that I really enjoy in my free time is spending time on open source and just computer stuff. It's completely a hobby of mine; I've not studied for it, I'm not a programmer, I just like it. Within openSUSE, I really like the MicroOS project. I'm a big believer in an immutable OS future; I think it's going to be used in loads of places — like my daughter, who's four years old: her laptop is a MicroOS laptop that she can just install some software on via Flatpaks, and she cannot do anything else on it except for turning it off and on again. And, just like Gertjan, I live in Groningen. I am married, I have two kids — one is hopefully still sleeping at the moment, the other one is just relaxing on the couch next to me, watching some TV. I joined the board as treasurer, since financial stuff is sort of my skill, so I can help the board and help openSUSE with something that I can actually contribute to. So thanks for that. Cool. And maybe a quick round of intros for the old board members, starting with Axel, who is old and new at the same time. Yeah, that's true. Thank you, Gerald. Hi, I'm Axel. I was just recently re-elected for a second term on the board. I live in Düsseldorf. I have two kids — one of them still lives with me, and he just came down preparing a sandwich and made me feel hungry. I'm mostly packaging stuff: software that I think is useful, software that I'm using myself. I'm focusing more on the desktop side of life, so I'm running Tumbleweed with the latest KDE desktop, and I'm really happy with it. I've been an openSUSE member since the 2000s, I think — it was already some time ago. Besides this, I'm mostly only active on the mailing lists — not so much on Discord or the forums, but I look into those every now and then. And in my daily life I work as a business consultant and project manager with a focus on supply chain projects. Then we have Simon. I'm next. Okay, I'm Simon. I'm no longer the most dedicated member of the board — at one point I had to get up at 5:30 am for meetings; now they're just after dinner, and that's far nicer, so thanks, Neil. I'm in Australia.
I've been on the board for three and a half years now. I package stuff in openSUSE, like the Enlightenment desktop, and I also work for SUSE as part of the packaging, maintenance and security side of things. That's probably about it. Cool. And last but not least, we have Vince. I'm Vince. I'm... Put your microphone up please, Vince. Can't hear you. Is it better now? Yes. Okay. Hi, I'm Vince. I work at TUXEDO Computers as a product manager slash marketing slash whatever there is to be done in technical stuff. I joined the board around the end of February last year; I was not elected but appointed to an open seat, and since then I've been serving on the board. Cool. And I realized I didn't introduce myself — I realized now, after hearing everyone else talk, that I didn't actually say anything about what I do and who I am. Everybody knows Gerald, so he doesn't have to introduce himself. Okay — I'm Gerald, or Jerry: CTO at SUSE and chair of this board. Yeah. One thing we as a board have been thinking about, I would say, a fair bit on and off over the last months is how to improve transparency within openSUSE — what we from our side can do to help improve communications. And one of the things we realized is that our board meetings might be a good approach. So twelve days ago we started with public board meetings. Given the time zone spread we have — between Simon in Australia, central Europe and the US East Coast — this is pretty much the only slot that works without making anyone do calls at 5 am or past midnight, so unfortunately there's not much flexibility there. But I was very positively surprised by the first public board meeting we had. It was a handful of people, but it was one of the more productive board meetings we've had. Not only was it an opportunity to listen — we actually got really good interaction, really good input, and I think the work we have been doing and some of the decisions and conversations really benefited from that. So it's definitely something we're going to continue. It's also something we are new to, so there is learning on our side, on the side of the board, and on the side of the project, and we welcome feedback on how to make this more useful to you, how to make it more transparent and more open — and we'll make it better over time. And an invitation: Neil is going to send out an email soon. The next one is this coming Monday, 1 pm Central European time, 7 am Eastern Daylight Time in the US — and past dinner in Australia. What is it in Adelaide, Simon? If I press the mute button I can talk — it's about 8:30. Okay, so that's really good. So that's one step we have taken. Another step we have started to look into, and actually started to put into action, is to make it easier to submit topics, and to be more transparent about as many as possible of the items that the board handles, just publicly. For that, Neil has helped us move towards a cool new system that we are also evolving as we go. So that's another project that has actually started to deliver already — and we're happy to receive feedback. Neil, can you go into that a little bit and introduce folks to it? Yeah, sure. So as I said when I was introducing myself, one of the things that I wanted to do was make openSUSE more visible — and that also includes visibility for people within openSUSE.
And something, as that had also dovetailed into my setting up the new code forge that we have for people to use to develop free software projects within the OpenSUSA project, I thought that it'd be a great idea to start using that so that we can give people an easy way to talk to us about things that are not sensitive so that we can not only do asynchronous communication that is easily tracked, and so we don't have to remember it in our heads, which was before this was actually a problem. We had to remember what we were going to do and figure out all that stuff. Now we have something that we can refer to. But also, because I wanted to make it so that if you didn't have to come up with what we needed to talk about as a group, I wanted to make sure that the community could also be involved in this process as well. And so if you go to code.openSUSA.org slash board slash tickets, you can see that we have a repo there with an issues tab where you can file an issue about something that you would like some feedback from the board on. And this may not necessarily turn into something that requires a meeting, but it certainly will turn into something where we can discuss this with the community in a structured, more visible form. And if it does require a meeting, like if it's something that we can't resolve in the ticket out of the gate, then we will mark it to be something for a meeting. And in our next public board meeting, we'll talk about it. And it's a very simple process up front. I wanted to keep it super lightweight so that, you know, because we're learning and we're figuring out how we want to do this. And as that evolves, we can implement more structures as needed or do more separations or whatever. But I wanted to keep this, I wanted the on ramp to be very, very simple. And I'm hoping what this does is it makes it, you know, more clear what the board actually does, what they can and can't do, and how we go about doing what we can do. So I'm not sure if you can read the chat there, Neil, but Leibos said you should mention feature requests moving there as well. Oh, yeah. So, no, I can't read the chat because I'm on another computer and venue list is an annoying platform that requires a token to log in every time. But, and I don't have the token on this, computer. But, yeah, so something that came up during the open Susa leap 15 top three development and I got hit with all the edge cases. So like I felt particularly in pain from this was, it was pretty hard to coordinate and figure out like what we needed to do. And I was fortunate enough to be in a semi privileged position to be able to interface with, you know, lovely Susa folks about it, but that like I meet me nearly one doesn't scale and the system that they're using makes it really difficult for everyone else to do it too. And so I've been talking to Leibos for a few weeks now about the idea of leveraging the same process that we've been using for, for the board to actually adapt that to do the same kind of feature request handling for open Susa leap for 15.4 because while it might be small up front, what I actually honestly expect that as we can develop this process to help the community be more engaged in the development of open Susa leap, both sleep and leap will benefit and there will also just be more stuff. After a certain point of wiki page is just not going to not going to work. 
And, and like when you need to track feedback and iterations on what's going on and sinking things back and forth between Susa internal and and the public community and making sure everyone is on the same page. Lubbush and I agreed that it would be a good idea to try this for 15.4. So I set up code.openSusa.org slash leap slash features as a repository for for doing exactly that. People can file feature requests about things that they need, whether it's for sleep or leap. And then Lubbush and I along with other release managers will make determinations based on whether it needs SUSA involvement for sleep components. And we will, you know, follow up and make sure that the community is supported to make their their things that they want to do and provide an open SUSA leap possible. We want to help make open SUSA leap the best quality stable long term supported enterprise Linux class distribution out there. With the latest and greatest KDE, of course. I was going to I was going to look how this is going to end like dramatic pause. Yeah, it was a long word. I had I was trying to come up with a word that describes exactly how I feel about open SUSA leap. And it turns out there isn't just one. And so I just went with all of them. You got the link. You got the link the wrong way around. It's actually late slash late slash features. I thought I said code slash leap slash features. Where everyone else the link is already in the chat. Oh, good. Sorry, I can't like just paste or type things because again wrong computer and venue list is dumb and does not let me just use login credentials or SSO. That's why you use Synergy to copy and paste it between your computers. If I had thought about that before I joined into this, I probably would have set that up. Thank you, Simon. I will remember that. But the next time I am doing something like this again between two computers on a virtual platform that doesn't let me log in between two different computers with the same credentials. Talking about which one of the one of the roles the board has has to deal with and that's not part of the public meetings is conflict resolution. Thank you. Thanks to the one of the other things the board is also very good at is going off topic and big side tracks. Chairman nuts. Yeah, not that you would ever see any of that here. And so one of the one of the thoughts that has come up that we have been working on a little and I'll defer to Hattian and Simon is moderation consistent moderation off across online channels. Yeah. One of the things we concluded was that there was a pretty big discrepancy between for example the mailing lists, the forums and our other online channels. I don't want to go into details that much, but one thing I can say is that the complaints that the board received were mostly about the mailing lists. So we started talking about moderating them. And we actually did. And I think we haven't done it long enough to define some proper result of it. But I think that our guiding principles are T and C, they should be across the community when we agree that to that as a board. And to add to that to also help with the consistency and for a bit of accountability. So that the board has an understanding of what goes on, what we're looking at implementing. I'll say looking up because we discussed at last meeting. And we haven't exactly figured it all out yet. 
And we have plenty of people to talk to about that that is getting everyone who's moderating a platform together in the same space, whether it's the forums or Discord or RC or mailing lists. So that we can all share information about who's being moderated. Partly so that the board has an idea of how many people we're moderating over a year, other than obvious spam. And so you can see any discrepancies. And partly it'll help us detect any issues that might arise if we can see that someone's never had any action taken on the platform I moderate. But Neil has already given them three warnings on some other platform. Then that's useful information for us to be able to share as a group of moderators. So if you moderate a platform, look forward to us getting in touch with you about that at some point soon once we figure it out. Yeah, exactly. And this is also somewhat consistent with the strategy that as a project that we're trying to have, which is we want to make sure that regardless of which platform that you're communicating with the community on, whether it's Matrix, Discord, Telegram or IRC from the real time perspective or forums or the mailing list from an asynchronous perspective, we want to make sure that those are relatively unified and consistent so that people who are talking to each other, there is no divide that makes it difficult for people to follow along with what's going on. And in the real time chats, this is being also handled by bridging all of the communications channels together. Now, this is not complete. This is still in progress because we literally just got our Matrix server working a couple of weeks ago. And we're trying to make sure that at least in the real time side, we want to make sure that if you're on Matrix, you can talk to someone who's on Discord, who can talk to someone who's on Telegram without having to be in all three places at the same time. And this is also something that this unified moderation stuff that we haven't quite figured out how to do yet is supposed to help keep things according to our code of conduct and our guiding principles and the way that we want. We want people to be friendly and awesome and have a lot of fun being part of the project and being part of the community. And when I say we will figure it out, the way means not just the board, but some of the moderators as well, most likely. Yeah, it's not just us. This is not an idea of being totally top down on here. Before we even started considering this, we engaged with some of the moderators from the various platforms to figure out what is even feasible and whether the idea is even good. And the reason we're even moving forward is because they told us that this actually would be helpful for them too, to make it better and make the experience better. And we want this to be a nice place to be in. Yeah, especially because we all think that, well, I personally have complaints from people being treated rude. And it means that we are losing users in such cases. And in a friendly environment, these new users that feel at home, that feel comfortable, they will pull in their friends. They will pull in their friends. That's growth, especially these young people. They're the future of OpenSuser and SUSER. Yeah, and those people who become excited and love the community and love the project, they want to learn how to make it better and they become contributors. That's how I started. I was just going to say in a rare occurrence, people in the chat think you're actually too quiet. Wait, what? 
I'm too quiet. Volume, not quantity. All right, let me... As you can tell, we actually do manage to have some fun on the board. I mean, there is loads of friendly joking usually going on in the meetings. And Neil said it, and I really want to emphasize it; actually, I just looked it up to make sure I quote it properly: we want to have a lot of fun. And that's a literal quote from our guiding principles. Another one is: we are working together in a friendly manner. And I'm sure in most cases there are no bad intentions. But I've also seen the guiding principles sometimes used as a weapon. And for me, the guiding principles should stand for something. And obviously, if you stand for something, you need to push back on some things. You need to defend the freedom. You need to defend the safe space that you have. But I think sometimes smiling and thinking, is this really so important? Or is there another way of looking at this than my way, or the view I have right now? That is a really good thing. And unfortunately, in the tool I don't see whether he is on, but last year I had lots of conversations with Christian, Christian Boltz, and I can tell you Christian is not someone anyone can convince easily if he has a firm position. And I'm not someone you can convince easily if I have a firm position. But what I really appreciated, and what I found really valuable, is having this interest to listen and understand, and then maybe take a step back and say, you know what? I still think this shade of green is nicer, but I can understand why you want that shade of green. And how can we put that together and do something that's checkered, or on Mondays we wear this shade and on Tuesdays we wear that shade. And frankly, that has happened. And that's what I really... because that's not easy. That's what I really appreciated in, to use that example, conversations where sometimes you realize you're wrong and say, you know, that wasn't right, I missed something. And the reason I'm mentioning this is, interestingly enough, the conflict resolution in the last two years has actually shifted a bit on the board. Initially it was mostly the mailing lists, hence us addressing this. But in the last months, I'd say, we started getting about one, it's like once a month, complaints about conflicts between reviewer and contributor, or between co-maintainers or so. And there we don't have a good approach yet. So let's see how that evolves. I'm hoping it's not the beginning of a trend, and hoping this is just like a wave. But if any of you has the interest and skills to see how we can actually put in place a group of people, or some process, to help navigate and mediate, that might be one of the things that we will have to face as a group soon. Yeah, something about that particular topic, personally speaking: I think that one of the reasons why we're seeing this is because... I think to some extent, they were kind of always there, but it was more about which ones were louder and which ones were more of a problem. As we have started solving these problems and started working with the other members of our community to make sustainable solutions to handling this stuff, the frequency of this stuff has gone down. This makes me so happy. The frequency of the stuff that comes to us has gone way down, and we're able to tackle some of the other aspects of this too.
In some respects, we've never fully implemented our guiding principles in a way that makes it consistent and understandable to the community when it comes to how we should work with each other and that sort of thing. And getting to that point where with this year of doom caused by a worldwide pandemic, I think it's just amplified that particular gap that we've had. And now that we've been solving it, things are starting to get better. I've noticed in a number of chats and a number of spaces that people are happier again, like communicating with each other and using and starting to want to contribute to OpenSUSA. And that is making me very happy because I never liked seeing people leave because they felt like they were not able to be heard or not able to feel welcome or any of those sorts of things. And so, yeah, I don't have any more structured thoughts on this. I just want to, you kind of hit on it, Neil. Ben actually asked the question, have you seen more an increase in the code of conduct and you kind of like definitely hit on to that. So I just want to sum up the questions. I know they're being answered, but I thought Attila brought up a really interesting one that kind of relates to our panel. And I think that kind of relates to our panel yesterday, which was, you know, interconnecting all the communication channels. The newer aspect of mediating it, you know, the trolls tend to move out pretty quick and they appear somewhere else. Is there anything? So something that has been an interesting side effect about bridging the real-time chats and unifying the moderation control across Matrix Telegram and Discord was that we don't get people who have tried, attempted to evade our bands anymore because as soon as they tried, they get whacked again anyway. And so people who act dishonorably in any of the chats, just they get pushed out fairly quickly and it avoids, it makes it so that it's not a problem. And in general, that has actually changed the balance of people who come in. Like it used to be that we get people coming in saying frankly stupid things and stuff that's deliberately trollish or whatever. And then they would just hide in a corner, you know, pretend to be nice a little bit and then do it again later and in another platform. So they would evade one platform, jump to another and then come back and then and that sort of thing. With the real-time chats, nobody does that anymore because they can't, it doesn't work. We know that they're them, we know what's happening and it gets shut down very quickly. So, the M-attacks are basically dead. To add Neil, over time we have gotten together a pretty nice team of admins, moderators, etc. all over the world. So there's pretty basically always someone there that can moderate. Yeah, I think the only platform where we have a true gap here is IRC. And that's been complicated by recent events that has made this difficult. And Simon, you're muted. I was going to say in some ways the recent events have actually fixed it. That's true. And we now do have a better spread of moderators, partly because I'm now one in the times when no one else is awake. Right. Yeah, the spammers on IRC that show up at midnight in my time where there's no coverage is now covered by Simon. Lucky me. Maybe people are also getting happier now with COVID being over or getting over and summer coming and stuff. That could absolutely be a factor. It's winter and rainy here. Yeah, I'm sweating now. I don't think COVID has anything to do with it. 
No, I meant like the lockdowns, people being locked up and getting annoyed with not being able to go anywhere. To answer Ben's question, have you seen an increase and how have you dealt with it, maybe before we cover the last topic we had prepared and then completely open up? Yeah, we have seen that. And I'm not claiming there's a causal relationship, but there is a very clear temporal relationship. I'm often accused of erring on the side of softness. I'm usually trying to balance and mediate. But there was one board meeting at the end of last year where I essentially started the meeting and said, okay, everyone, enough is enough. We need to take strong action. And that may have been the one board meeting where more warnings and moderations, etc. were issued than, I think, in the year before. I mean, it still wasn't a huge number. But I think maybe sometimes becoming a little stricter and sending a signal can help. Yeah. Maybe to add one word: when we're having conflicts and we're trying to moderate, we try to hear both sides and then talk to the people, try to get a moderated discussion between the individuals involved. Unfortunately, we found out that this is not working in all cases, because sometimes people just refuse to talk. And if somebody from the community has an idea how we can cope with that, I think that would be very much welcome as well. Right? Of course, I mean, we can only help if the people are willing to listen to the other one and to pick the information up and to think about their own behavior. But if they're completely refusing to talk and saying, hey, this is my stance, my point, and I'm not moving, what do we do? With that pause, I don't think we really mentioned it, but go ahead, and if you have questions, please put them in the chat on stage one. And now I don't really know how to bridge. Continue on, Giorg, continue on. I can just start talking. Let's end on a happy note, which is budget. Yay, money. Maybe we can take one thing along as a last subject, because then Lubos asked us to talk about collaboration with other distros or communities. I just moderated towards Syds to talk about budget. Which was absolutely unprepared. Let's have Syds talk about budget. Okay, you mentioned Christian already. One more thing. Oh, boy. And then another three subjects. No, go ahead, general. Syds has the stage. So I started on the board, or I joined the board, because I wanted to help as a treasurer and to actually help with the financial side of the project. Mainly as well because I figured something with a foundation would be a good idea. And with a foundation, it's probably also very useful to have someone that has some sense of actual financial statements, of reporting, of tax filings and stuff. But that's not the only thing that the treasurer does, of course, because everything else just continues that was there before, like the travel support program. Last year, we didn't really have any travel support, so that was kind of easy for me to start with. And starting also as a liaison between SUSE and openSUSE, I've been trying to talk to people. I've spoken to Andrew about what he was doing. I've spoken to a few more people like Doug. And I'm just trying to get a sense of everything that is happening financially with the project and how openSUSE and SUSE are intertwined, and what we can actually start to do to even make a foundation, if that's actually financially possible.
So that's sort of what I've been doing, and that's what I wanted to say about it. I don't know if anyone has any questions about it in a few minutes. Okay, cool. So before we go further, we had half an item on the topic, and someone's asked a question about the foundation. So I guess that means I get to talk about it. Basically, the status on the foundation is that not much has changed since we made the proposal at the conference several years ago, as people would be well aware. SUSE management has completely changed a couple of times. And so we're basically at the point where we need some people to sit down and write a business case, to a large extent financial, about what we need, how it will save SUSE money, which it should do, and stuff like that. So we can start to present that to the new SUSE management and make a really compelling case. That is something I'm happy to work on at some point when I get time. I don't know quite when that is yet, hopefully soon. But if others would like to work on that as well, get in touch and see what we can do. You meant probably others outside the board, because we have already started and prepared a basic presentation about it. And just thinking out loud, if we're not moving forward with the foundation, which needs a certain involvement of SUSE as well, maybe it's an idea to start with a kind of supporting organization. What do I know? The Geeko association or something like that, which is dedicated to supporting the openSUSE project. It's not a foundation in that case, but it could be a much lighter thing. Many schools in Germany have these kinds of supporting associations. So when your kids enter the school, they ask you, they want you to join this association, and then they're collecting money and giving this to the schools to buy stuff that they are not able to buy from the budget that they're getting from the government. Yeah, and that's also kind of a thing in the United States as well. There's a specific class of charities that is designed for clubs and associations to be able to handle that sort of thing. My personal opinion about having a foundation or a charity organization supporting or owning the openSUSE project is that I really would like for us to behave more like we are that way before we actually are. Because it's a lot less dangerous to make mistakes when you don't have that legal structure underneath you up front. And I want to make sure we have those practices developed and things like that before we make that jump, because I don't want to get that wrong. So before we did our last foundation-related proposal, probably a year before that, which would have been before... I think everyone other than Neil was on the board. We did look at that. And the conclusion we came to was it wasn't really feasible in that capacity, and we should either pursue a full-fledged foundation or we're not going to get benefits that outweigh the risks from the other one. In between, we changed it, or we shifted the focus a little bit. The idea at that point was to build an e.V., an eingetragener Verein as we call it in Germany, and to transfer all the trademarks and everything to this foundation. This was before that. We looked at the idea you just proposed as well. It would have been before you were on the board. Regardless, that's probably future discussions anyway.
Regardless of what mechanism of implementing a legal structure for the Open SUSE project, I personally am not comfortable with the idea of doing so until we've done practice rounds where we actually behave like we have one before we actually have one. It was important that we got SIDS on the board and actually our treasurer and actually participating as a non-voting board member to me because it means that we are at least more closely getting set up to be able to handle things properly before we actually have to. One of the key things that's always come up in this discussion is it's great that we now have SIDS but if in 10 years time we don't have SIDS and we're not doing our reporting then the legal fines we could get from not meeting our reporting would be enough to completely sink the project. One thing that we've always maintained strongly in our previous proposals is that we would like SUSE to help sponsor Open SUSE by means of helping to cover some of that paperwork in which case it becomes a non-issue for anyone we have a guarantee that we're going to meet our legal reporting requirements because SUSE would provide resources to do that. That's obviously one of the things that if we come up with any sort of business case for a foundation we need to be showing that there is large benefits to a foundation that offset that cost and that cost would and one of the things we looked at with the smaller organizations is even those smaller organizations require some level of paperwork and would inherently have that risk. That's right. We're probably as Gerald would say we're probably now discussing detailed semantics of stuff that doesn't need to be discussed in detail in a board meeting at this point which is the other thing we do along with getting sidetracked all the time. Yeah we get into the details far more than we probably should which makes our time. At least we stopped taking three-hour meetings for a one-hour time slot so there's that. So speaking of running over time just to let you know we can continue as we're not over time but continue as long as the next speaker comes in so continue as long as you want. I was going to suggest that maybe this is a good time for us to move into the Q&A stage to make it easier for anyone else who wants to ask us questions. I can't see anything else popping up in the chat. Hartan mentioned that someone had actually asked for us to talk about a collaboration topic earlier so I guess that would be good to start back up with. So what was the topic in particular Hartan? Let me read out the question Ben asked. You, Lubos asked, by the way one thing that I forgot to Gerald when he was asking for topics was collab with other distributions Ben started Distributors.club which I think is a pretty cool idea. What's your thoughts? I've been doing a lot of hanging around with Fedora guys that sort of happened thanks to Neil. But I found that we discuss issues a lot in the bar and on the chat channels, issues that both communities meet and we also think that from a perspective that we have in our guiding principles why not be nice to other distributions and cooperate. Why have teams on both sides working on the same upstream bug? If we could find a way and it will take time to get it more coordinated. I know there's the political issue that it would be also Susie and Red Hat cooperating maybe but the thought why do the work all over again and cooperate. 
We can learn from in my opinion their organizational structure and the people from Fedora admit that we have a lot of tools that they could learn from that we do stuff in a way that they could do better. So I guess I could say that in my role as part of SUSE I see that in some areas we do do this really well already but also on the SUSE side especially with regards to security issues. We do a good job of working with other distros to have fixes for issues before you even know there are issues. So in that space where the collaboration is most important there already is a lot but there are certainly plenty of other places where we can learn. Or as I'm not sure if it's an Australian saying but we should definitely be shamelessly stealing ideas from other people. It's an American saying as well for sure. Good ideas. Yeah I think this collaboration actually happens already and I love to see that and it's at different levels. In some cases it's one person who's a one foot here one foot there. I mean Ben if you ever meet I'll ask you whether Neil when he talks with you starts every third question with in open SUSE we do because that sometimes sometimes is a run where he says in Fedora we do but it's good because that's actually breaching that sharing hey I've seen this here and it works. I mean the other one. I have my Fedora hat for such cases. Yeah learning. If you look at mailing list posts on the Fedora mailing list I mentioned open SUSE a lot. I mentioned it. It comes up a lot where I basically hammer like some of our tooling and processes around this stuff just kind of sucks and these problems are very well solved in open SUSE. Why can't we do stuff like it there here because it's just I'm spoiled now. I don't want to have to do the crap work all the time anymore. The thing is like it doesn't work in open SUSE fix it now please. So I think it's it's it's there are individuals cross and you know have anchors feed through in more in one community. Other models I've seen work very nicely is I know that from the toolchain people because I mean I've been engaging in GCC for like a long time and it's just for the red and green or blue and green people to work together. So I mean one does the SUSE side one does the Fedora Fedora side and you know using the new version of GCC and 229 packages break but sitting here and sitting there you work together right and you don't care it's just we work on together on the joint project and there is at the project level and so one thing that I at first when I because I was not involved so I can't take any credit as much as this is something to really take credit and give kudos to whoever did is the is the mutual sponsoring and this relationship that Fedora and open SUSE have right. 
I mean that's the first time so I was like this isn't this a little odd and then actually no because there is so much commonality and I don't know I mean Dagis instrumental I'm sure or has been in that and using amazing keeping that and that's you know this thinking a little out of out of the box and out of like this is my territory and everyone else go hey that that goes that goes great lengths and yeah I'm totally open let's do let's do more of that it's nice to have this identity right in a relationship in a partnership whatever doesn't mean you become the other there is things that that Fedora will that will be differentiate Fedora positively and hopefully there are things that will will keep to differentiate open SUSE positively and and that's fine mascot that's we have a cool mascot there is no mascot for Fedora sadly it's by everyone's best efforts we have a cool and then have this I think they have this friendly competition where you have differentiation and but you also you know when you see something works well work together and I think this together and apart in the in the in the constructive manner I mean that's great that that is really that is really I think one of the strong points of open source one of the good things about the two communities for Fedora and open SUSE is that I've met quite some people from Fedora and I joined their release party etc and I never met one single person that objected against Neil being a board member of open SUSE not one single so apparently nobody cares I've seen that differently in other communities so have I oh my gosh you know it's not just Fedora and open SUSE I mean I mean another I mean a number of other distributions why are open SUSE have a level of cooperation and friendliness that I have not experienced in other communities and that has made it has made it a lot easier for me to bring all the interesting things that I come up with regardless of where it starts from whether it's in Fedora open SUSE and bring it to both and in other places it's more difficult and it's and and it makes me sad sometimes because you know at at the end of the day what we're trying to do is to promote free software and to and and to make it the default choice whether you're you're just working whether you're playing whether you're living whether you whatever you're doing and in order to make free software successful we've got to we have to work together to produce the best free software and I don't know how else we would do it unless we like like certainly there's going to be some duplicative effort but that effort can actually be useful because some of the other perspectives can see some of the other paths that are being tried to see whether they work or they don't and eventually we can resolve to common to a common path but being able to experiment to be able to try these different paths and then see where they fail and where they succeed is ultimately how we can also help in in collaboration um real quick thing I wanted to go to Patrick I know I know Simon you've been responding a bit to Patrick's comments in there he does bring up a valid point and you do as well like you there is a system in place to like keep the knowledge and share the knowledge but you know where what's the succession plan I think that could be probably something that could be part of the elections um so something too list but yeah I mean I guess I can talk on that slightly because I already answered in chat so I'll answer what I said in chat for people aren't reading it one of the 
one of the questions was how do we train new board members and so as part of the design of the board is outlined in our election rules everyone has two terms and after their two terms they have to have a break which means that we always end up with a some older more experienced people such as myself sitting on the board alongside some new people such as Neil but we have particularly when Gerald joined because when Gerald joined we discovered that Richard is the previous chairperson also held a lot of that knowledge and previous boards probably hadn't done the best at writing that down so Gerald has helpful has hopefully started to create a wiki page with such knowledge and important information to make that process easier but probably around election time we could do a better job of explaining what the actual role on the board is and what we're mostly doing so that we help get the best candidates which at times certain people have tried to do if someone there have been times when people have run for the board on a campaign of I want to achieve this this and this for the community and this and that and people have been able to say well that's generally not what the role of the board is we more focus on this and that but I think it's probably something we can communicate better leading into elections what our role is and what we're doing most of the time I would say something like what is the succession plan or something to that effect you know as or to be addressed you know as a common thing that could go forward if you understand me yeah okay now that we have a new system I can go and create a ticket about that and we'll discuss it further at another meeting yeah and the new system has a couple of other features that we might use for this purpose like it can host static websites that can be managed through pull requests to update content in a passion that makes it easier to review and stuff like that none of that is set up yet because nobody's actually asked for it so far but some of the stuff that we do as the member of the board where we're like you know we have new information we want to you know cumulatively update that while also maintaining all the history and authorship and being able to see you know what and why and all those things and also just having a more consistent discoverable place to find all this stuff like some of the issues that we had is some of it that Richard did in fact document but we couldn't find it because finding where it was wikis are hard wikis are very hard so some of this is also gonna I think we'll try to bring it together into a more discoverable place maybe it's just markdown files in the board repo that we have maybe it'll be a doc site or something like that that we that we have set in the in the board project or something I don't know yet but there's a couple of options of like how do we want to make this a little bit more discoverable anyway we now have a ticket for that it's in all the charts and in the future I guess maybe something that would be useful in that ticket is if you see us talking about this and you think you could possibly like to be a candidate in the next election and there's stuff you're not sure about add stuff to the ticket so that we know to help us know what to address goodness knows that I was confused when I started doing when I started trying to figure out what I was supposed to do I think Simon and I had a couple of hours of conversation about this before I even like you know throughout my candidacy because I just didn't 
know what I would be what I could even do that's probably yes and on that point also feel free to just ask current members of the board we're mostly a friendly bunch we mostly don't buy it we mostly talk to you about it in a reasonable way mostly qualifier we mostly mock each other and not people that come to us yeah as like Simon does on apparently a regular basis to me yeah I should mention Richard actually did it hand over a fair bit of knowledge and information to me so he put great effort into that which was very valuable I think it's the sometimes it was the easy things that you know that the more day-to-day things that nobody thinks about because you're on the board for for a couple of years and everything for you is very natural that this is this is how we do it right and so some of those I had to I had to discover and I'm sure and that's a natural process for everyone in a new role I mean that just is natural for every every new board member and it would always be but how can we make it more I mean how yeah how can we in the in the light of succession how can we document things even some of the easier you know how do you just we change the board meeting the time of the board meeting how do we how do we change your reminder email because there's actually a rather clever system and that's and I think one of the heroes probably Christian helped me with that and once you know it it's easy but if you if there's an email you get and you don't know where it's coming from then you say many boards have struggled many times with that to get to him now it's documented I documented and now it's really simple so it's that kind of thing I mean and that's not that's not critical for a open SUSE project but yeah really some of the best practices and and it's more transparent for others so I think it's good to onboard new board members it increases transparency which is a good thing by itself and and also if you if you write it down that's actually something then you can work with because maybe you realize or a new board realizes we want to handle it differently but if it's if it's written then it's something that's more tangible and then then you can explicitly tackle that so um but you're all right I think that we have to thank Simon first he needs to go to bed yeah he just wrote that yeah I'll get a bit eventually not quite yet but okay sometime in the next 10 or 15 minutes okay do we have any any questions left I don't think so what's your time now Simon it doesn't look like any question that's a great question it's 10 past 11 at night so that's the time that your kids would know go out right yeah they're not quite old enough yet but I do have to get up at six okay that's another thing it's Sunday tomorrow come on what are you doing on Sunday at six I mean going for a run could be one thing at least at our temperatures at the moment but I don't think this is the thing that's running but no I'm getting up and operating some cameras for a church I guess I'll probably try to close out at least some portions the one thing I do want to say I want to thank you all for you know participating with the with us you know this happens every year and I think the community looks forward to it but I also want to thank all of our sponsors that you know have participated in that and one big thing that's like happening here and is the video team and they have done incredible work and they do every year so you know a lot of appreciation goes out to their absolutely shout out to them big thanks up definitely and 
you know, actually, I didn't explicitly mention the video team. I think we also have volunteers that helped moderate the sessions, so beyond those specific cases I mentioned, it's just amazing. There's a lot of work going into an event like this. And I mean, Doug, you know, because you're the guy who actually pulls and holds it together. Talking about a succession plan or backup plan, we need to dump your brain at one point, or clone you. Yeah, thank you. It gets inventoried, right? We'll freeze my brain or something. We'll put you into a robot, that way, you know, you don't have to sleep anymore, forever. Yeah, maybe then we can share Doug with Fedora: half of the day he does openSUSE, half of the day Fedora. I'll check with Ben. I think I want to keep Doug for openSUSE, he's like our secret sauce here. So you're not too much, you're not completely into sharing, I'm sensing. I like Doug too much, I don't want him to burn out with both. Yeah, one is enough. That's good. Thanks everyone. Yeah, thank you. Thank you. Yeah, thank you, Doug. Thank you guys, thank you everybody that was here. Yeah, thank you all for this great event, and thank you all for coming as well. I should turn my mic on when I talk, but I was going to say I won't be one of them, because I'm going to go sleep, but I'm expecting there'll probably be more board members in the bar on Jitsi for most of the rest of the afternoon if you want to talk to people, and I'll put that link in the chat somewhere. Okay, good idea. All right, thanks everyone. It looks like we're going to take off, so, okay, where do we go?
Question and answers session with the openSUSE Board
10.5446/54693 (DOI)
Good morning, good afternoon, or good evening wherever you are in the world, and welcome to my talk about multibuild Python, here in this beautiful 2020 continuation. So first a little bit about me: I live in Sydney, Australia, surrounded by wildlife that wants to kill or maim me, and I have roughly 20 years of experience with packaging software. Most of that is Debian based, but a lot of the concepts remain the same across disciplines and distributions. I started with Python in roughly the year 2001, and I quickly fell in love with it. So I've been following along with it from the early days of 1.4. I've been at SUSE for coming up to five years, and I've spent just over a year on the Python packaging team with a few others. So I'm going to assume that you have some familiarity with the RPM build system, how macro expansion works, and a tiny bit of knowledge about OBS. So, which versions should we ship? Tumbleweed currently ships the full stack for Python 3.6, 3.8, and 3.9. This means we make every effort to fix build failures for leaf packages in those versions. We also currently ship Python 2.7 in Tumbleweed, but we're trying to remove it, and disentangling it is quite hard work. We don't build any leaf packages for 2.7, and a lot of upstreams are shifting their focus away from it, obviously. The main version of Python, that is, what you get when you run /usr/bin/python3, is currently 3.8. Leap 15 ships 2.7 and 3.6 as its fully supported versions, so we build packages there for 2.7 and 3.6. We do ship later versions, but only with the interpreter, pip and setuptools, and so we don't build leaf packages for those later versions. So there's this disconnect between python-base and the python package. python-base ships the actual binary and the parts of the standard library that don't require external dependencies, because it's contained in Ring 1. The python package contains everything else, because some modules require external C libraries and things like that. That's a little bit of a lie too, because python-base requires Bluetooth, but we get around that. python-devel includes the header files, like Python.h, and we of course build documentation in the python-docs package. python as a bare word in package names means Python 2, whereas Python 3 has the version number in the name, such as python38-devel or python39-base. So the macros I'm going to talk about are contained in this package. They're standard RPM macros, and that means they're written in Lua and not Python, so I'm going to skip over the internals. So I introduce this macro, %pythons, first, because pretty much every other macro uses it. We set this macro in the Tumbleweed project config and the Leap project config to the versions of Python we want to build modules for. So this is currently set to python36, python38 and python39 for Tumbleweed. If you don't specifically override it, that's what you'll get. You can override it if you wish, and if you only want to build for a specific version of Python, you can set this to python3, and we call this singlespec. So you'll only build for whatever the main version of Python is when it's built. Some complex packages do this, like TensorFlow. So %python_build and %python_install: these two macros both just basically call setup.py for each version. %python_build calls setup.py build, and %python_install does the same thing and calls setup.py install. So %python_expand loops over whatever is defined in %pythons, so you can run things for each version of Python.
For instance, you may wish to remove something in each site-packages directory, or run a custom test runner, or run fdupes, or something. I include this macro fairly early because a lot of the macros that I mention further on down the track use it internally. So from time to time, you may need to know what version of Python you're building under. You can, for example, only BuildRequire a module for one version of Python. These two macros help with that: %python_version returns the minor version, so 3.6, and the no-dots variant, %python_version_nodots, returns a whole number, so 36. %skip_python is a definition and not a macro. Set it to one, and that will instruct other macros, such as %python_expand, that they should skip this version. So, for example, %define skip_python36 1. You can specify it multiple times, but for any more than two, you should probably think about setting %pythons. And you can't use this definition to switch to singlespec, because if you do, then the package you're left with will contain that version and not be called, for example, python3-something; it will be python38-something. So %python_module: we use this in BuildRequires. It will add the python prefix for you, and will also loop over every version in %pythons. So BuildRequires: %{python_module requests} will pull in python36-requests, python38-requests, and python39-requests. This can also get a little awkward when modules already have Python in their names, so you get things like BuildRequires: %{python_module python-foo}. And due to the way this expansion works, you can specify %{python_module devel}, which will pull in, for example, python38-devel. This line always exists in a lot of spec files. It defines an expansion for %python_module if it doesn't already exist, so that it will pull in python-foo and python3-foo. If your spec file is really only for Tumbleweed and it won't go anywhere older, then you don't need it, so you can remove it. But it also doesn't hurt, because it will only take effect if %python_module isn't defined. So %python_subpackages and %python_files: this will expand your source package to pythonXX-name, so for example python39-Django, where it includes the version number. The macro effectively generates new spec files for each version it encounters, along with its friend %python_files, which expands the %files section. I've listed these macros together because you have to use them together. If you have one without the other, you'll get errors. Since a fair number of packages ship entry points that are installed into /usr/bin, we need to specifically handle that case when we build packages for multiple versions, since we're not able to ship the same file in multiple packages. Twisted is a really nice complex example of this, because it ships a lot of entry points. So we use %python_clone to clone the script, which moves it from scriptname to scriptname-version. And then in the %files section, we mark it as an alternative. Don't forget Requires(post): update-alternatives as well as Requires(postun): update-alternatives. And be sure to add %python_install_alternative and its opposite, %python_uninstall_alternative, to the %post and %postun sections. Don't worry if you missed all of this, it will come up later. So %python_sitelib and %python_sitearch. The first expands to the arch-independent site-packages; distutils calls this purelib, as in modules that are pure Python. And that's most often used, because most modules are pure Python.
And the arch-dependent one distutils calls platlib, and %python_sitearch expands to that. You'll use it for packages that build a shared object, a .so file, or use something like CFFI. So, these testing macros: %pyunittest and its arch cousin %pyunittest_arch set the PYTHONPATH environment variable and then call python -m unittest discover, which discovers tests and then runs them. %pytest and its arch-dependent cousin, %pytest_arch, set PYTHONPATH and then call pytest to do the same. Some packages still call the last one on the list, but upstream setuptools has deprecated that use of setup.py, so you should update them to use the first macro. But also, you shouldn't feel you need to be constrained by these macros; to pick on Twisted again, it, for example, uses trial in its %check section to run tests. So let's put this all together. We have a simple example, python-cranes. I've trimmed down the spec file so it'll fit on screen; I've dropped things like Version, Release, the description and the %prep section. So you can see we only really require setuptools, %python_module and the python-rpm-macros package to build it. This package also contains no tests, so there's no %check section. But this shows how the %python_module, %python_subpackages, %python_build, %python_install, %python_files and %python_sitelib macros all work together in one spec file. So if we build it, it gives us the source RPM and the three binary packages, one for each Python version. And if we pick on python39-cranes, you can see it ships the __pycache__ files, the egg-info, the cranes module itself, its readme and license. This is a more complex example, python-raven. I've had to trim it down a lot more, because it's a lot larger. You can see that we're using version qualifiers in the %python_module BuildRequires. And this is the update-alternatives requirement that I mentioned before. So %build is very simple here, but during %install we need to clone the raven CLI tool and run fdupes. During %check, we export an environment variable for Django's benefit and then we use the %pytest macro. And as you can see, we can pass command line arguments to pytest using that macro. So here we install and uninstall the alternative, and we also mark it as an alternative in the %files section. As a bonus example, from python-tox we can see here if-guards for a BuildRequires. We can't use %python_module here, because that will expand against %pythons, and we can't do that since importlib_metadata is only needed for Python 3.6. We also need to check for both python3-base and python36-base, so we handle both Tumbleweed and Leap, since this may be backported to either distribution. So, future plans that we have: we're discussing allowing user control over which version of Python is run for /usr/bin/python3. But this gets a bit complicated, and may in fact not work out well for things like singlespec and things like that. We'd also very much like to remove Python 2 from Tumbleweed, but it's pretty firmly wedged in there, with a lot of things unfortunately still requiring it. So, would you like to help? You can of course file bugs on any packages. You can send us a submit request to our development project in OBS, which is devel:languages:python, or its subprojects, and we have an IRC channel on Libera.Chat, which is #opensuse-python. And thank you for listening. Are there any questions?
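Since the spec files shown on the slides do not come through in this recording, here is a minimal sketch of how the macros described above typically fit together. It assumes a hypothetical pure-Python package called "example" that ships a single console script; the name, URL, source tarball and file list are placeholders rather than anything taken from the talk.

%{?!python_module:%define python_module() python-%{**} python3-%{**}}
Name:           python-example
Version:        1.0.0
Release:        0
Summary:        A hypothetical example module
License:        MIT
URL:            https://example.org/example
Source:         example-%{version}.tar.gz
BuildRequires:  %{python_module setuptools}
BuildRequires:  fdupes
BuildRequires:  python-rpm-macros
# needed because the console script is managed with update-alternatives
Requires(post):   update-alternatives
Requires(postun): update-alternatives
BuildArch:      noarch
# generates the python36-example, python38-example and python39-example subpackages
%python_subpackages

%description
A hypothetical example package used to illustrate the multibuild macros.

%prep
%setup -q -n example-%{version}

%build
# runs setup.py build once per flavour listed in %%pythons
%python_build

%install
%python_install
# /usr/bin/example cannot be shipped by every flavour at once,
# so clone it to a versioned name and register it as an alternative
%python_clone -a %{buildroot}%{_bindir}/example
%python_expand %fdupes %{buildroot}%{$python_sitelib}

%check
# runs the test suite once per flavour with PYTHONPATH set up
%pytest

%post
%python_install_alternative example

%postun
%python_uninstall_alternative example

%files %{python_files}
%license LICENSE
%python_alternative %{_bindir}/example
%{python_sitelib}/*

In a real package the BuildRequires, %check section and %files list would of course follow what upstream actually needs and installs, as in the python-raven example discussed in the talk.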
Shipping modules for multiple version of Python, all at once openSUSE Tumbleweed now provides the Python interpreter and packages built for Python 3.6, 3.8 and 3.9. In this talk, I'll go through broadly how the interpreter is packaged, how module packages are built, how packagers can use the provided macros to their benefit, some sharp edges to watch out for, and what future plans we have. Please note: I am on the East Coast of Australia, so please be aware of that when scheduling.
10.5446/54694 (DOI)
Alright, everyone, I hope there is anyone out there; I can't check, because it's pre-recorded. openSUSE Docs: tame the beast, make it a friend. I am Adrian. Along with me on this lovely front page is Attila, who unfortunately won't be able to present the talk with me, but that's okay. And by the way, Attila, if you listen to this and you disagree with something, it's your fault, man. You can't blame anyone else. Okay, so who are we? Attila lives in Indonesia, he's a DevOps and sysadmin, and he's also an entrepreneur, and he works for the company he has founded. And I live in Switzerland. I'm almost through my PhD, and it's a long story, but long story short, it's about concepts and linguistics. It doesn't really interest us today. And I'm a Python and Haskell fanboy. On the menu today: now and how it started, because yeah, it's been quite a journey. We started doing documentation for openSUSE last autumn, if I'm not mistaken. So yeah, that's going to be the bio moment of this talk. And then we will ride the Tumbleweed wave, and I will explain to you, but I'm sure you already agree, why wikis are a trap. And then we shall, if you will, discuss the vision we have of moving the docs under maintainership, as opposed to just doing things on wikis and on these types of hit-and-run platforms where everyone can say everything. And I will explain why it didn't solve all of our problems. And then we will talk about saving the world, nothing less and nothing more, and why documentation is underrated, and why you, and that is a plural and also a singular, and it covers everyone, everywhere, at any time in the universe, why you should help us, or at least be motivated to help us. So, a look back. We started late November 2020, and it started well, and that doesn't mean now is worse, but if you look at our contribution graphs, I mean, we had a lot of different people who have helped. Of course you can't expect people to commit full time to a new project, but it's quite a good debut, I would say. And the goal was to have the first nine sections of the table of contents covered by summer 2021, and we have only six. Yeah, we are slightly behind schedule, I know, we know. Why? How lazy, how incompetent can we be? You will understand. But first let me tell you about why we got into documentation. First, the altruistic reasons. First, good documentation takes some weight off the shoulders of user support, and that's quite obvious: if people harass user support with technical questions, user support is not doing anything else. So yeah, you want to make sure that most of the questions can be handled by the docs, so that people don't have to ask for help. The second reason is that documentation makes sure that time and energy is not spent twice: if someone solves an issue, that issue should be considered solved for everyone else, and you don't want to have people reinvent the wheel every time they find a solution to a problem. And the third reason, which is not often thought of, is that good documentation equalizes opportunities. Why? Because if you are not lucky enough to talk to the most knowledgeable person around, think just of people living abroad in linguistic communities which are not tightly connected to the English-speaking world. If that is your case, you are worse off than everybody else who is lucky enough to know someone around who can help. So we want to make sure that no one is left behind, and for that documentation is really important. How did we get into documentation, now?
The egoistic reasons: well, documenting something tests your understanding of it. So you really dig into details and you find unsuspected things that you had not thought of, and that is a very pleasant experience. And committing to documentation is a nice incentive for learning. And what I mean by that is, on that point I can't speak for Attila, but for myself, I almost didn't know anything. And I started writing docs not because I knew, but because I wanted to know. And that is something that is very rewarding for me. Let's talk about Tumbleweed. Because Tumbleweed, I mean, everyone around the openSUSE ecosystem knows more or less that Leap is well covered, just because it is the spin-off from SLES. So you would expect to have some solid and healthy documentation available. But Tumbleweed, it's another story. Tumbleweed, by the way, takes some understanding to be taken advantage of. So what does the documentation look like for people interested in Tumbleweed? Are they going to turn to the official Leap reference manuals? No. Or at least not only. Because the Leap reference manuals do not talk about Tumbleweed at all. I mean, it's not just the word that is not mentioned. It's that all the Tumbleweed-specific workflows and tools are either not talked about at all, or talked about only with Leap in mind. So this falls short for people interested in Tumbleweed, because the workflow between rolling and fixed point releases is so different. So what are they going to do? Turn to the openSUSE wikis, perhaps the SDB? No cigar. There is no explicit or visible maintainership, so you can't trust anything. That's because you don't know who's vouching for what you're reading. Who is your safeguard? And that enables me to make a broader point about wikis, the "wikis are a trap" part I mentioned earlier. So in wikis you have an absolute absence of visible maintainership. That means the truth is contaminated with uncertainty. And let me unpack this for you. How are you going to tell up-to-date and factually correct information from outdated or incorrect? It's not an easy task at all. You cannot tell apart up-to-date and factually correct from what is not. So the user has no rational option but to try and pray, or maybe worse, go to Fedora or sleep with Ubuntu. But no matter what, it just doesn't do the job. So wikis work, just like any repository, only when you can keep your promise, honor the contract with the user that contents are curated and maintained. If you cannot keep your promise, don't make things appear as if you were able to keep it. Don't fake it. You can't. So say you can't. At this point I anticipate some objections and replies. So what about Wikipedia? Isn't Wikipedia reliable sometimes? Sure, but they pay people. So that doesn't count. And what about Arch? Well, Arch, what about them? They use wiki categories like GitHub repos, and most of the contents are written by an extremely thin layer of expert contributors, just like GitHub repos. So we, in the docs team, use GitHub to make explicit this contract between maintainers and users that other distros leave implicit, and the contract goes as follows: dear user, we hereby promise that everything you'll find is factually correct or recommended for as long as we offer it. So we embrace the fact that good docs are like a wet market: it's going to be fresh and you have people to serve you. You don't help yourself with your sticky fingers. All right.
Jokes aside, let's turn to the types of questions we have to address and have to anticipate when writing the docs for Tumbleweed. First you have questions about facts. Shall I pick X or Y in the installer? Shall I pick option X or option Y if I want to do Z? If I'm booting to a black screen, how to use an NVIDIA GPU, how to offload my video output to NVIDIA, how do I get software X, how do I update or upgrade, etc. Those are matters of fact. And also, and this goes way beyond what you could possibly find in the Leap documentation, there are matters of recommendation. These are at the front stage of questions such as: how often should I update? Should I update? Let me emphasize this: how often? How should I solve conflicts between dependencies? Should I use zypper or YaST, zypper or DNF, tool X or tool Y? So documenting Tumbleweed cannot do away with recommendations. We have to not only move away from the wikis onto something more sustainable, such as a maintainer-centric repo like what you can find on GitHub, but we also have to take on board recommendations, best practices, things that you should do, not just things that serve a certain instrumental purpose or that just state certain facts. So the takeaway is: with Tumbleweed you have advanced users, and not just advanced users, but even if they are not advanced users, they are interested, they are pushing their system and they are questioning their defaults. And you can't meet these people, meet their aspirations, just with technical facts. In other words, Tumbleweed appeals to people eager to learn and to tinker with their system, so good docs are required to honor the promise that the risks they are taking by using Tumbleweed are worth taking. All right, so let's talk about the obstacles we had to and still have to overcome. Just imagine for a second you're a user and you have problems with NVIDIA, because guess what? People have problems with NVIDIA, and that's very surprising, isn't it? So where should these people turn? Well, they can turn to the Leap official documentation, to the wikis, they can go to the forums, and they can check out our friendly gecko at openSUSE-guide, a very friendly person, a very knowledgeable user. But the point is, you have a lot of different sources, too many sources giving similar but slightly different tips published at different times. We don't want to control what third parties say about our beloved distribution, so we have to either work with them or compete with them, so that, well, we just provide the best docs and there is no need for other sources. But we can't compete with them unless we present a single source of truth and recommendations to the users, which means, guess what? Removing redundancies. That means deleting, or recycling in more pleasant words, the work of other past contributors. And that's taking us a lot of time, and that's one of the reasons why we are a bit behind schedule as far as our objectives go. Another reason is that meeting our own reviewing ambitions is a tall order. Just to make things concrete, check out our reviewing process. Whenever we have a new submission, first we review the structure and the contents of the text, and then the language, the style and the punctuation. An absolutely boring and, I mean, very enticing and compelling activity, yet that's something we can do ourselves. We are the only cooks in the kitchen, everything goes nice and smooth. What about the two other things we've got to do? Well, we've got to make sure that we have peer reviewing on the contents.
So we've got to review again the structure and the contents, and we've got to review again the language, the style and the punctuation. And, I mean, to do these two last steps, we need to turn to experienced and knowledgeable contributors, who, even when they are flawless on technical facts, don't always agree to making recommendations. That is, in other words, they are not fully, totally convinced by the past couple of slides I've just offered to you. So we need to accommodate this. We need to push them, so to speak, to go a little bit beyond the facts and be a little bit more, I mean, less humble and less shy, and put some realistic recommendations on the table. So it's either that, I mean, they're a little bit shy, or they don't have time to explore recent use cases so as to make an informed recommendation. So that's the problem. We're working on docs at a time when there is not really a tradition of using such a reviewing process, because, I mean, the community, the project, so to speak, is not used to this maintainer-centric model we're using here. And finally, the third reason why we're a little bit behind schedule is that the technology is more integrated than the people. And by this I mean, you have very complex tools such as YaST, zypper and snapper, which stand at the core of the openSUSE user experience, and they are very neatly integrated. Nothing to say about this, but you can't say the same thing about the people that work on them. That means there is a mismatch between the level of integration of these tools on the one hand, and the level of integration or coordination of the people that maintain them. It's not a criticism. It's bound to happen in a world where you need to specialize. But what would be really helpful to us would be to have specific time windows or platforms where we can go to potential reviewers and, I mean, first identify them and then go to them and ask them if they would review stuff. There's Progress, there's Pagure, two platforms. The first one has been around for quite some time already. Pagure, we're migrating slowly, but we're migrating to it. So that should mitigate the problem. That might not be sufficient. All right. Saving the world. Time for some silver linings, after all these obstacles and this pain we've been through. So first, a good step in the right direction. We've been able to assemble a Telegram bot that is supposed to make it easier for people to access the documentation, even from within the context of a live conversation when they're asking for user support. So for instance, you have this user here: "I mean I could just do a btrfs snapshot lol. I have actually been meaning to reacquaint myself with snapper." Nice words, by the way. Good English vocabulary. And boom. Answer. /doc snapper, and you have the bot sending a list of links, each of which is relevant with respect to snapper. So it's searching the docs using snapper as a keyword. And apparently the user who asked the question seems to be satisfied. So yeah, you make user support a much more lean and simple experience if you can funnel people that have specific questions to specific answers available from the docs. And another thing that's been on my mind for some months now is that we could imagine that we basically create new dynamics within the community to help not only documentation, but also accessing documentation and reviewing documentation. So it's just a game of thinking here. Three conditions describe this ideal situation I have in mind.
So imagine that whenever a user finds a gapy or an outdated or incorrect contents they can report them to the documentation team through a simple procedure. So basically you'd imagine a button on a web page, a web page that would be serving the documentation. And then another condition would be whenever the docs team needs a reviewer they can broadcast their requirements to a waiting room of people interested in reviewing. So people who have made themselves available to reviewing. And these people can pick up the tasks they're interested in the most as they come. So it's like a pipeline. At one end of the pipeline you put contents that needs to be reviewed and on the other end of the pipeline you have knowledgeable people that can pick up the task and send back. And the third conditions would be imagine that all contributions, be writing the docs or be reviewing it is easy to trace to their authors and to reference from a curricular. So people that help us on GitHub, people that review the docs now or in the ideal scenario I'm painting at both strokes. All these contributions are traceable. Imagine how powerful an incentive that could be to people that are just not, I mean to people that would like to make use of their open source experience on a more professional level. So with these three conditions met you would have a situation where contributing would be even more rewarding to everyone actually. So the individual contributors would win more because their contribution would be more directly actionable. You would have reviewers that would also win in the sense that they could directly have some impact on the quality of the documentation about the packages and the software that they themselves make. So they would have more control and they would be able to have more. It would also act as a sort of feedback if you will because I mean the type of questions you have to answer gives you a clue as to what problems people have downstream. And also the end user would be just as easy as using a wiki to report and to signal some shortcoming. So yeah that's a scenario I think is we're not quite here yet but we are heading in the right direction. So conclusion, the documentation is a wild beast but it's also a very precious one. It's like a precious navigational instrument. It helps a community know where it's heading and it helps it remember where it comes from because it embodies a particular history and a particular tradition. So it's part of our DNA so to speak. It expresses the gene of the community in a particular way so just for that it's a beautiful thing. And used in conjunction with good feedback mechanism like the end of the year survey we're going to talk about later today. It makes for one of the most beautiful place you can contribute in open source. And that's it. Thanks for your attention and don't forget to check out our repository, our telegram group and also if you want to read these slides at your own pace you can do so from my GitHub app. See you later. Bye.
Technical documentation is like the box of bandages in your bathroom cupboard: you don't know it exists unless you actually need it. And as it happens, it's when you need it the most that you find it half empty... In this presentation we tell you everything you didn't know you wanted to know about the openSUSE documentation. You'll embark on a journey into the wild, from luxurious wikis to austere source-control platforms, where we'll try to widen your eyes on the importance of a central frame of reference, and on the numerous challenges standing in the way of discoverability and user-friendliness in the extremely rich and ever-evolving ecosystem of the openSUSE distributions. You'll be walked through the front line between users eager for technical facts and users eager for best-practices, and told about our approach, our ideas, and why you could actually have fun contributing.
10.5446/54695 (DOI)
Bonjour à tous, bienvenue à la conversation d'OpenSuzo en armes, à la conversation virtuale d'OpenSuzo. Nous allons voir ce qui s'est passé depuis la dernière conference de OpenSuzo de l'Empire de l'Empire de l'Empire de l'Empire d'Armworld. La légendaire pour aujourd'hui est la suivante. Je vais commencer avec une petite introduction de moi-même. Nous allons avoir un autre vue d'OpenSuzo en armes d'armes. Nous allons parler de la service d'armes, d'armes QA, aussi d'armes d'armes avec des micro-s, des libes et des projets de steppe. Nous allons avoir un petit mot sur OpenSuzo Wiki, et finalement, le to-do list. Je suis Guillaume Gardé, membre de l'Empire d'Armworld. Je suis ingénieur en armes, part de l'équipe d'OSS. Je suis délicaté à Suzo et OpenSuzo depuis 2018. Je suis membre de l'Empire d'Armworld, en prenant le care de l'architecture. Mon main de focus est sur l'air 64, le flavor 64-bit. Nous avons de support pour ces architectures en timbre, huile et de lait. C'est la même pour armes V7, timbre, huile et de lait. Mais pour armes V6, c'est timbre et plus en maintenance mode. Il n'y a pas de plus grand chose à faire sur armes V7. Nous allons avoir un petit overview d'OpenSuzo en armes workflow, pour comprendre comment s'éteindre et obtenir des nouvelles packages ou des updates en armes. Nous allons commencer avec un autre overview d'OpenSuzo workflow pour X86. Nous avons un projet appelé Factory en OBS, où nous construisons packages pour X86. Si l'OBS finit par construire, un nouveau snapshot est mis à l'OBS, et si l'OBS est gris, ou si nous choisissons d'ignorer des défais, nous pouvons relier cet snapshot à l'utilisateur, et nous appelons le Dumbledread. Si nous voulons éteindre quelque chose dans l'OBS, nous devons soumettre l'update pour l'OBS. Nous avons un processus d'automative review par BOTS et d'une réveil manuale. Nous avons aussi un test de prétention, avec l'OpenQA et quelques checkers installés. Nous devons vérifier si tout est bien. Si cette submission est acceptée, la package est updates en factory. Nous devons attendre pour le build finir, et un nouveau snapshot est mis à l'OpenQA, et cela sera révéli au utilisateur d'OBS. C'est pour l'OpenSuser factory et Dumbledread pour X86. Pour ARM, le projet n'est pas l'OpenSuser factory en OBS, mais l'OpenSuser factory ARM, qui est un lien avec l'OBS. Toutes les sources sont réutilisées et updates en temps réel. Nous avons juste un petit overlay pour la version snapshot pour ARM et le contenu de ISO et FTP3, qui peut s'affronter à l'OBS. Si vous voulez un projet d'OBS, vous devez le mettre à l'OpenSuser factory, et cela sera révéli au project ARM. Pour l'OpenSuser LEAP153, c'est un peu différent maintenant, car il used à être un projet de séparation pour ARM, mais maintenant, l'AR64 est directement inclus dans l'OpenSuser LEAP153. Il sera facile pour l'AR64. Mais pour ARM V7, nous ne sommes pas inclus dans l'OpenSuser LEAP153, donc nous avons toujours un projet sub-project qui est l'OpenSuser LEAP153 ARM, et cet projet s'appelle l'OpenSuser LEAP153 et aussi aux projets de séparation. Je vais donner plus de détails sur cela plus tard. Si nous nous sommes revenus au travail pour ARM, vous devez soumettre votre update à un factory, et les sources sont révéles en temps réel par ARM. En fin de construction, un nouveau snapshot est mis à l'OpenQA pour ARM, et si c'est de la grignolesse, il sera mis à l'utilisation et révéler comme un tambour-leveau pour ARM. 
Si l'OpenQA nous montre quelques blocs, le snapshot ne sera pas révélé à l'utilisation, et vous pouvez le fixer par une nouvelle soumission. Le prochain snapshot sera testé. Nous avons maintenant un mot sur l'OpenBuildService, le OBS. La bonne news est que nous avons plus de puissance de build pour ARM en OBS très vite, mais nous avons déjà une puissance de build pour H64 pour réévaluer les rins. Nous avons deux rins, un petit rin, le bootstrap, et un grand rin, le minimal X, où nous avons des packages de cores qui peuvent être rébuildées tout le temps. Le premier rin, le bootstrap 0 est déjà gris, mais nous avons encore quelques build failures à fixer en minimal X. On ne peut pas avoir une puissance de build pour H64, donc nous avons seulement un projecte de stage pour H86, qui a déjà trouvé quelques problèmes qui sont communs entre les architectures. Et bien sûr, depuis le dernier talk, nous avons fixé beaucoup de build failures pour ARM. Plus de build statistics de la semaine dernière. C'est pour OpenService et ThunderWeed, pour H64 et X86. Vous pouvez voir que les build failures et les risques sont pratiquement les mêmes entre les architectures. De l'opening QA, nous avons un local worker, qui est une machine d'H05, qui nous permet de avoir 16 chemis. Mais nous avons aussi des remotes workers, donc nous avons deux machines de web service Amazon. Nous avons une machine M6G, qui nous permet de avoir 10 chemis, qui sont capable de lsc. Nous avons aussi un part-time worker, qui est basé sur une Oncom LX2K, qui nous permet de avoir 3 chemis. C'est seulement un part-time worker, car il est aussi utilisé pour construire des packages localement, donc ce n'est pas en ligne tout le temps. Nous avons aussi des tests de hardware réel, grâce au machine Air64. Nous avons encore quelques valeurs de la semaine dernière, donc vous pouvez voir que nous avons beaucoup de tests pour ThunderWeed et Armour. Nous avons encore des notifications automatique, avec des changements principaux, qui sont envoyés à la fin de la nouvelle snapshot. C'est très bien d'avoir des changements principaux. Il y a beaucoup de packages, qui sont fixées, qui peuvent être à build time, en OBS ou à run time, grâce à OpenQA. Nous avons toujours de nouvelles packages, de nouvelles systèmes, de nouvelles supportes et de nouvelles features, grâce au kernel et à l'espace de l'utilisation. Nous avons aussi des systèmes qui sont mis au point de construire des projets de contrôles, de l'OBS, à la main-temple-read. Nous avons des features armes, donc vous pouvez aller au Wiki sur le support de l'armée d'architecture pour obtenir les informations. Nous avons déjà en train d'établir les atomiques, depuis la semaine dernière. Pour les identifications de point de l'identité et de l'identité de l'article PONGE, nous avons déjà ajouté des supports de l'armée de l'article de kernel, depuis juin et august dernier. Nous avons aussi aussi des supports de l'article de l'automne depuis novembre. Nous avons aussi des extensions de la main-temple, de l'article MTE, donc nous avons des supports de l'article PONGE depuis juin et des supports de l'article de kernel depuis juin et august dernier. Le support de l'article de kernel a été ajouté dans le flavor de l'article de kernel, grâce à l'article CASAN. Pour le LIP 15.3, la grande nouvelle est que l'article L64 est maintenant partie du projet LIP 15.3, ce n'est pas dans un projet séparé. 
La construction de l'article LIP est très simple, nous avons des packages de l'article LIP 15.3, pour les packages de l'article CASAN, donc GCC, kernel, KMU. Et puis nous avons des packages additionnelles, des projects de l'article PANGE. Et enfin, nous avons quelques packages à la top, surtout pour le branding. Et grâce à tous ces packages, nous avons le LIP 15.3. Bien sûr, durant le développement, beaucoup de packages ont été fixés, peuvent être à la time de construction ou à la fois de rentée. Et enfin, nous avons seulement 15 failures de build 4.h64 dans le package app. Pour le LIP 15.3, c'est un peu différent parce que nous ne pouvons pas être basés sur le RPM de SLEE, le repos de l'entreprise RPM, parce que le LIP 15.3 n'est pas supporté là-bas. Donc nous avons besoin de rebuilds de packages pour la V7 de SLEE. C'est fait dans les projets de la STEP, et les projets de la STEP sont seulement utilisés pour la V7 pour maintenant, mais ça pourrait être utilisé pour les architectures additionnelles dans le futur. Si vous voulez avoir plus d'informations, vous pouvez aller au Wiki sur le portal LIP slash l'OpenSuserSTEP. Je l'ai aussi lié à l'annoncement de support de la V7 pour le LIP 15.3. Si vous voulez le lire, vous pouvez aller au news.opensuser.org et vous allez trouver l'accent. La information de l'OpenSuserSTEP est pas encore réellement élevé, parce que c'est encore un travail en progress. En regardant l'OpenSuserWiki, la page principale de 3 pages est le portal ARM. Nous avons appris quelques pages dans le namespace ARM. La première est, bien sûr, le portal itself, le portal ARM. Et aussi, les pages de la compatibility hardware ont été updates. Et les pages de l'HCL sont maintenant proposées par la système en chip, ce qui peut être facile pour les gens à proposer. Et bien sûr, comme je l'ai dit avant, la page avec les statues de l'expansion de l'armée d'article est en train d'établir. Donc c'est un armes.architectures.support. Finalement, je vais avoir un mot sur la liste 2. Nous avons besoin de relier le LIP153 pour le Mv7, dès que c'est réellement élevé. Je n'ai pas de l'ETAE8. Nous avons besoin de continuer de l'improuvoir avec de nouvelles informations et d'informations à date. Si vous avez quelque chose à faire sur votre main, n'hésitez pas à le faire. Nous avons besoin d'improuver le OBS aussi. Nous avons besoin d'améliorer l'armes sur plus de projets de développement, pour attirer le failure dès que possible. Si vous détectez le failure de l'armes dans le projet de développement, probablement, ce ne sera pas le plus possible pour la production de l'armes. Si vous avez quelques packages dans votre projet de développement, n'hésitez pas à l'améliorer, surtout pour l'armes AR64. Nous pouvons encore l'improuver au QA, probablement, ajouter plus de tests AR64. Nous pouvons également éclater l'adverte de l'adverte de l'article, qui est utilisée pour l'adverte de l'article réel, qui est principalement pour le Raspberry Pi 2, 3 et 4. Les idées principales sont d'adverter du support pour l'AMG à des devices USB, qui permettent de vérifier l'output de la machine. L'output est présentement testé par une connexion de connecteur de la nette. Nous pouvons également ajouter le support de l'USB, pour envoyer des mous et des événements de la base. Nous devons continuer de monitor, de construire et de tester les défis, donc nous devons fixer les défis dès que possible. Nous devons réparer nos fixations sur l'opinion de la bugzilla, qui peut être réalisée en upstream aussi. 
Nous avons besoin d'aide pour tester et obtenir des systèmes de feedback. Il peut être des petits portes ou des grands services. Même si tout est bien, s'il vous plait, reportez les statues. Vous pouvez nous aider à modifier des softwares aussi, pour ajouter de nouvelles features. Par exemple, nous avons ajouté le Vulkan pour mesurer la package récemment pour Armour. Nous pouvons aussi avoir des aides sur WSL support en OBS pour AR64. Nous avons un projet de travail progress, dans mon espace de nom de la maison. Si vous voulez l'aide, s'il vous plait, vous pouvez vous acheter. Si vous voulez rejoindre la team de armes, vous pouvez le faire sur ARC. C'est toujours le channel OpenSuser-arm, mais nous avons avancé de 3 nodes et nous sommes maintenant sur Libera.chat. La liste de mail a été updates, donc n'est pas OpenSuser-arm à OpenSuser.org. Merci pour votre attention. Si vous avez une question, s'il vous plait, vous pouvez le réchauffer.
This talk will cover the past year for openSUSE on Arm, mainly focused on AArch64, but it will also cover armv7 and armv6. At the end, we will have a quick look at the future and where the community could help.
10.5446/54706 (DOI)
tuned are So hello, good morning everyone. So I'm going to give some lectures about topological recursion. Well, first what is topological recursion? I like to illustrate it by this picture. And well, this is just a picture. And in fact, it's, we would like to find really a geometric understanding of that picture. But so topological recursion is just a way to compute certain quantities. And that are useful in practice. But it's still something in progress. And there are still lots of things to understand. And what is really truly amazing is how is that it works. It works very often in many situations. So this is my plan. I will start by a very general introduction with some example. And then I will give the definitions of what I call spectral curves, topological recursion. I will start to give the first few properties and explain how to do computations and related to the moduli spaces. And mentioned that there is a recent approach by Konsevich and Sobelman, which is slightly different. And the link between the two approaches I think is not totally well understood. And also then the third part will be about studying deformations. And that's really where you have integrable systems. And then I will give some applications. Some of them are conjectures like the applications to knot theory. And so this will be depending on how much time we have. So first, what are we talking about? Topological recursion seems to have a lot to do with mirror symmetry. And mirror symmetry has two sides. One side is what is often called the A model, and which is in fact enumerative geometry. And on that side, the goal is that we have a certain space, moduli space. So I'm very general at this moment. We have a certain moduli space, let me call that MgN, that depends on some parameters. Let me call them Z1, ZN, whatever they are for the moment. They are just moduli, which is a space of typically remain surfaces with N boundaries of genus G and boundaries Z1, Z2 up to ZN. So it's a set of surfaces decorated, of genus G with N boundaries, decorated with some moduli. There can be more moduli that I didn't write. Well, whatever it is, and the idea is that we would like to compute the volume of that space. So the question which we would like to answer is, so let me call that WGN of Z1, ZN, would be the volume of that space. So this means that we need to have defined kind of form that we can integrate, volume form that we can integrate. It can be a simplectic form, it can be a measure, it can be whatever you want, but we want to compute the size of that space, and it will be a function of Z1, ZN. I missed it maybe something. Is MgN is compact? For the moment it's totally abstract thing, it's no, it does not need to be compact or whatever, it's just a space, it's just a very general introduction of the spirit of what we want to do. We have a certain space, and we want to integrate a form over that space, and we assume that somehow this makes sense. Case by case, we will have something precise, either compact or with some function that decreases that infinity or whatever, or sometimes it will be a discrete space, for instance the space of triangulated surfaces, it's a discrete space, so the sum is in fact a finite sum for instance or something like that, and you can have grating to make it a formal series or whatever. 
So it just, we want to compute those volumes, and the idea of topological regression is imagine that we have a certain structure, imagine that we have a certain structure such that you can compute them by recursion. So imagine that knowing only the W01, so which is the disk, so by the way I will call this the amplitudes, it will just be a name for the moment. So if you know the disk amplitude on the cylinder, so if you know the disk on the cylinder, imagine that you have a recursive procedure that allows to compute all the others, and that corresponds somehow to that picture. So then it's possible to compute WGN by recursion on this number which is 2G minus 2 plus N which is a higher characteristic somehow. So imagine that you are able to compute those volumes by recursion on the Euler characteristic. I will give a precise example later, sorry. 2G minus 2 minus N, it's a pretty simple surface. Okay, so the Euler characteristics of the objects that are inside the space. And the I's are complex numbers, not, not, for the moment they are whatever you want. If they are circles, those numbers, topologically, sorry, no, for example, in Granff-Litton series, you can see the model space, of course, which intersects some cycles, yeah? Yeah, yeah. It's all the number. For the moment, I'm not saying what they are. Okay. So you're saying that volume is kind of Euler characteristic. She wants. According to the definition. No, no, no. No, W is no. But I didn't say what is the measure that you integrate or whatever, so it's something very abstract. It's just some notation, it's just, I just want to give the spirit, but later I will give a very precise example where you will see what happens. So now we have the B model side, which has to do with complex, with complex structures, complex curves. Well, let me call it complex curves. And you have something that seems to be totally unrelated to that. It seems to be totally unrelated to that. So to give me, to give an example, so imagine that you have a certain algebraic curve, given by its equation P of x, y equals zero. So curves here are always embedded in surface, yeah? Your curves are always embedded in surface, are not? For the moment, it's just a polynomial equation P of x, y equals zero. So I mean, I'm in fact considering an immersion of a remand surface into C cross C, for instance. So it defines an immersion of a remand surface in C cross C. This will be just a simple example of what I'm going to call later a spectral curve. So basically we have, so if you plot, so you have something like that. There infinity is not included. For the moment, it doesn't matter for what I'm going to say. So you have an object which is a complex, which is something with a complex structure, which is two dimensional, which is on that curve, you have one form, which is just y dx. Again, it does not really matter. For instance, here I plotted, there is a nodal point. Can it have any importance like x very quickly zero? Yes, it can. And what is one form on this thing? Sorry? Ah, so it can be sub schemes, could have any importance. Can be, well. In fact, it's not the way I'm going to define it later, but it just let me just to fix the idea for the introduction of what we have. So on that side, we have some object with a complex structure. And we have forms on it. We have one form, which is y dx. Let me call it omega zero one. It's a one form. So that's the remain surface, let's call it sigma. Okay? 
And sigma is immersed by, so each point of sigma as an x, and so each point here, x, y, as an x projection and a y projection. And there is a one form y dx that lives on the curve. It's a one form. It's usual to choose also two form on, so this is a one form on sigma, on sigma cross sigma. And we shall take it symmetric. And it will be in fact a tensor product. Let's call it omega zero two. I'm not going to say much more about that for the moment, but it's a symmetric one form on sigma cross sigma. Sorry, a symmetric two form on sigma cross sigma. I will say more about that later. And then we define. And also, should say it has fallen. Sorry. I will say that later. Okay. I'm not saying what it is for the moment. I'm just saying you have some meromorphic forms. Define, yes, meromorphic forms. Define by recursion on again two g minus two plus n. Some n forms, some meromorphic. On sigma to the power n. The assumption of meromorphic is on two form or only on two form? No, this one is obviously meromorphic. This one is obviously a meromorphic also because y and x are meromorphic. So all of them are meromorphic. And typically by computing residues. So I will write all the precise definitions later. But for the moment, what I just want to say is that on one side you have an enumerative geometry problem. On the other side you have a complex curve. Those two problems seem totally unrelated. And what we want to see is if, for instance, for a given type of space of enumerative geometry problem, is there a certain complex curve for which that computation would give exactly the same quantities as here. Or vice versa, given a complex curve, is there a modular space such that those quantities that we are computing here, so defined by recursion, so we call them omega gn, that will be some residue of something. I'm not going to write what it is. So are those forms somehow related to those amplitudes? And surprisingly, the answer in many, many different types of problems, the answer is yes. And the idea is that on this side, the computation is quite easy. You just have to compute residues. You can put that on software and that computes it automatically for you. You just have to press the button and it computes. So it's quite easy and it gives you answer to the other side. And it's very surprising. So what I will show you in this lecture is, what I will show you in this lecture is, is that given what I will call a spectral curve, so which is somehow a complex curve with some extra structure, this embedding into C cross C and some extra, a little bit extra structure. So let's call it S. We shall build a modular space, MgN and a homology form, a total logical form, but I will call lambda of S, such that the omega gN are indeed some integral of our MgN of this lambda of S times some, let me call that Psi, Z1, Psi hat of ZN, but also some homology classes. Psi hat of Z, but also some homology classes and they are typically related to shunt classes of line bundles. So we shall build them explicitly. Line bundles and what on the spectral curve or the, or even surface? No, line bundles on the modular space. So, Z are points of the spectral curve. So it just to give a very general idea of what we want to do, we want to relate two things that seem totally unrelated. A problem of enumerative geometry and the problem of computing some forms on a curve. And parameters might be logical, Z corresponds to Z, or there will be some mirror transformation. There could be some mirror transformation. 
So, what do you mean by that? Well, okay, let me do an example. So let me do an example, so which will be my one, two example, Mirzharani's recursion. So, for the moment I didn't give any definition, I just try to give a kind of general idea of what we want to do. So now let me consider MgN of L1, LN is the modular space of hyperbolic, so of surfaces. Well, okay, let me call that Sgn and omega, where Sgn is a surface of G-nus-G orientable with N boundaries. Obviously, N will be always positive. Sorry. N will be always positive. Here it can be zero. And to publish the question as well. Sorry. And to publish the question also. Yeah, okay, there will be some, okay. And omega is hyperbolic metric. So this is the space of hyperbolic metrics on Sgn, such that, so hyperbolic means curvature, the curvature is constant and equals to minus one. And such that the boundaries are geodesic of lengths L1, LN. And you modulo isomorphisms, which are in fact isometrics. Okay, this is some very well studied space. And for instance, if you take an N1, in M13, you will have something like that. Surfaces like that with three boundaries, L1, L2, L3. And it's well known that there is a space of coordinate, a set of coordinate Fentrell-Nielsen coordinates that are defined as follows. It's possible, so every such surface can be cut by geodesics, by closed geodesics. So these are closed geodesics. In fact, you can check that there are always 3G minus 3 plus N such closed geodesics that cut the surface into pairs of points. So here you have three pairs of points. This way of finding geodesics that cut the surface into pairs of points is not unique. Sorry, I didn't understand. What do you mean by two pairs? By what? No, I didn't understand the word. Pulse. Pair of points. It's 0, it's 0 minus 3. So somehow you have one troser, one troser, one troser, one troser. Okay, if you glue them together, you can reconstruct such a surface. If you glue them along their geodesic boundaries, you can glue them only provided that they have the same, that the boundaries have the same geodesic lengths. So the coordinates that you will use are the lengths of those geodesics, L1, L2, L3 for instance here. But that's not sufficient because a pair of points has, there are some special points on the boundary. There are some special points on the boundary. And when you glue together, you don't have to put the points at the same place. You can rotate by an angle. Yes, you can rotate by an angle. And in fact, so if you record all the gluing angles, okay, so the, so the Fentanyl set coordinates are the, the, all the, all the geodesic lengths on the gluing angles for I equals 1, 2, 3g minus 3 plus n. They are coordinates of that space. So meaning that for every such gluing, so if you fix the length of all those geodesics and some gluing angles, you find an element of that space. On vice versa, every element of that space is locally uniquely, can be locally uniquely, can be locally uniquely recovered by that. And the reason is that if you fix 3g of these gluing lengths, there is a unique pair of points hyperbolic with those 3g of these gluing lengths. So these are local coordinates. They are not global coordinates because there are different ways of cutting the surface. So for instance, the same surface or just let me give you a very simple example. This surface, so in M04, you can take this cutting or you can take that one. 
You get two different lengths on coordinates, but they represent the same point, both represent the same point in the modular space, but you have different coordinates because it's only local coordinates. However, what's Vile proved, Vile-Peterson proved is that the two forms summed from i equals 3g minus 3 plus n dLi, which d theta i. It's called the Vile-Peterson form and this form is well defined over the modular space and is independent of the choice of coordinates. It's independent of the choice of coordinates. So you can see that the two forms are the same. So you can see that the two coordinates is independent of the way you cut. It's Vile-Peterson form. Form. It's a two form, so let's call it omega. It's a two form. Well, it is also sometimes denoted as 2 pi square kappa 1. I'm not going to say what is kappa 1 for the moment. It's called the Memphold class. So you actually compactified it. Otherwise kappa 1 comes. Can you take my point? No, you can restrict the open part. Okay. Okay, it's just an notation for the moment. So this one form, sorry, these two forms allows to compute a volume. So this is a very old problem. And where is the eraser? Where is the eraser? So here the Li's are real numbers, positive real numbers. Remember the Li's here. Li belong to R plus. We are positive real numbers. Okay. And what we would like to compute now is the volume. Vgn of L1 Ln. We would like to compute the integral over Mgn of L1 Ln of these two forms to the good power such that it becomes a volume form. And the power is, well, let me call that dgn. And dgn will be just a notation for that number which will come everywhere, 3g minus 3 plus n. Okay. So you see that this is the dimension, sorry, 2 times dgn is the dimension of the modular space. It's the number of coordinates. And so if you raise these two forms at this power, it's a maximal dimension form. And divide by 1 over dgn factorial. So that's the volumes you would like to compute. And it's not so easy to compute them while some of them are quite easy. So as I said, M03, remark that M03 of L1 L2 L3, I said there is a unique power of point. So it's a point. There is a unique element in that space. Okay. So by definition, you will say that the volume, vgn of L1 L2 L3 is 1. Okay. More difficult. People have computed v11 of L1. And the answer is 1 over 24 2 pi squared plus one half of L1 squared. What is that space? Sorry. What is the surface? So it's something with one boundary on genus 1. So it's something like that. So one possibility, for instance, is to cut here. Okay. And the volume is that. Okay. It's what it is. Another example, v04 of L1 L2 L4 is 2 pi squared plus one half of L1 squared plus L2 squared plus L3 squared plus L4 squared. You see that it is always a polynomial of the LI squares. It's not obvious at all from the definition, but it's always a polynomial of the LI squares. That's what you get after computing. This has been computed by a method of hyperbolic geometry. It's quite complicated. And the idea is there an easier way to find the same quantities. Well, first, in fact, instead of, so in 2004, Maria Mirzani discovered a recursion relation to compute all those volumes by recursion on 2g minus 2 plus L. And that's why she got the fifth middle, basically. And it was only three laws to compute in a rather easy way all those volumes by recursion. Excuse me. The formula depends on the way we have to compute. In fact, Mirzani's method is somehow to make the sum of all possible ways to cut and somehow divide. 
It uses what's called the Max-Chain formula. I don't want to enter the details, but her proof is a long proof. And somehow the way it works is you have to take into account all the possible ways to cut. Okay. Somehow avoid double countings. So I'm not going to write Mirzani's recursion for the volumes m vgn. I'm going to first Laplace transform. So vgn depends on the way we cut. No. No, no, no. No, no, no. No, no, this is the definition. So the definition is you compute the volume with the Ville-Petersen form. The Ville-Petersen form in a local patch of coordinate can be written. It depends on the way you write it. It depends on the way you cut, but the form is in fact independent of how you cut. So the volume form is independent. The volume is independent. So this space has a certain volume and you want to compute it. And it's hard. And the reason why it's hard is precisely because the local coordinates do depend on the way you cut. But it's only local. So let me first Laplace transform and define wgn of z1, zn is just integral from 0 to infinity of L1 dL1 e to the minus z1 L1. So you just Laplace transform. Okay. Oh, sorry. It was here v0, 3, of course. So if you do this Laplace transform, it's easy to see that you can do this. So you can do this. So you can do this. So you can do this. So you can do this. So you can do this. So you can do this. So if you do this Laplace transform, it's easy to see that w03 of z1, z2, z3 is just 1 over z1 square, z2 square, z3 square. We just compute the Laplace transform of 1. Basically. W11 of z1 is, let me write it this way, 1 over 24 times 3 over z1 to the power 4 plus 2 pi square over z1 square. This is to the power 4. Another example is w04 of z1, z4 equals 1 over z1 square, z2 square, z3 square, z4 square times 2 pi square plus 3 times some from 1 to 4 of 1 over z1. So if the volumes are polynomials in the Li squares, it's quite easy to see that the Laplace transform forms are polynomials of the 1 over zi squares. It's kind of obvious. And so if you write Mirzarin's recursion for the volumes and Laplace transform it, you will get a recursion for the WGNs. And that's what we did with my student, Orantin, in 2006. So we just Laplace transformed Mirzarin's. So Mirzarin's. So this is the CRM by Mirzarin in 2004 plus Laplace transform that we did with Orantin in 2006. So this is just Laplace transforming Mirzarin's recursion. And in the WGN, what do you get? You get the WGN of z1, zn. So this is for n larger than 1 equals residue when z goes to 0. Let me write it this way. z1 square minus z square 2 pi over sine 2 pi z times WG minus 1 n plus 1 of z minus z z2 zn plus sum g1 plus g2 equals residue. And let me write it this way. I1. So this means you should take the set z2 to the n. You should split it into two complementary subsets in all possible ways. And WG1, 1 plus cardinal of I1, z I1, WG2, 1 plus cardinal of I2 minus z I2. And that's it. And let me put what I will call sum prime here. And sum prime means that you exclude from the sum, exclude the case where g1, I1 is 0 and empty ensemble. And the case where g2, I2 is 0 and empty ensemble. So you exclude those terms from the sum. Let me show you that it's, so I cheated a little bit. A residue is in fact an integral. A residue is an integral. It does not mean just picking a coefficient in a series expansion. Very often people don't write the integration variable in residues because somehow, very often, this is assumed. I mean, you have no, there is no ambiguity on what is the integration variable. 
And you, very often people don't write it. But it should be written. You can only compute residues of forms, not residues of functions. And the residue is an integral. And this is particularly useful to write it when you are going to make changes of variables. Otherwise, you forget the Jacobian. And it changes the residue. And this is the residue. So, Ant provided that we define, so which is not a volume. Sorry, all those volumes were defined only, so hyperbolic volumes are defined only for 2g-2 plus n, strictly positive. So, for instance, 0, 1 or 0, 2 are not defined. There is no m01 of hyperbolic surfaces. There is no m02 of hyperbolic surfaces. There is no, so which means that gn must be different from 0, 0, 0, 1, 0, 2 and 1, 0. These are the four cases that are not, where hyperbolic volumes are not defined. And they are called the unstable. Well, if you have a surface with a constant curvature minus 1, the Euler characteristic is, sorry, 2 pi times the Euler characteristic is the area of the surface. Sorry, it is the integral of the curvature, so it must be negative. So, for a hyperbolic surface, the Euler characteristic must be negative, strictly negative. The surface is not defined. Yes, yes. The whole modular space is not defined. So, mgn is not defined. So, for this, mgn is not defined. It does not exist. Would you care to tell me, I did not understand this, they see you, but you said integral. So, I'm going to give, I'm going to do the example of computation. So, sorry, I didn't say provided that we define w02 of z1, z2. So, we define it, it's not a volume, but we just define it as 1 over z1 minus z2 to a square. Let's define it that way. So, let me, let me do the computation. So, let me, example of computation, w11. So, let us apply the formula. So, z is a new variable. Do you want to put d z1 and d z2? Sorry. W02, do you want to? No, not yet. Not yet. For the moment, these are just functions. Later, I will define forms. I will multiply by d z1, d z2, and so on to make them forms, differential forms, but for the moment, it's just functions. So, d z times z1 square minus z square times 2 pi over sine 2 pi z times, and what do we have in this bracket? In this bracket, we have, well, here, g equals 1, n equals 1, that means you want to put w02 of z on minus z. Plus, on the priori, there could be this big sum. On this big sum, you would like to have g1 plus g2 equals 1, and i1 equals, well, equals the anti-set. That's what you would like to have in that case, in that sum. And you see that either g1 equals 1 on g2 equals 0, or g1 equals 0 on g2 equals 1. So, which means that all the terms that could arise in that sum are excluded terms. So, in fact, there is no extra terms. So, this is just what you have. Excuse me? No, no, no, no, no, I started with z2. So, z1 is here, and here, that big term contains only z2 up to zn. Somehow, it's related to that picture here. 1 remains on the left side. So, 1 seems to play a different role from the others. And what is not obvious at all, indeed, is that what you will get in the end will be symmetric in all of them. It's not obvious at all from the picture or from this residue computation. It's not obvious at all that what you get is symmetric, but it will always be symmetric. So, let's compute that residue. Sorry, and the residue is taken at z equals 0. So, if you want to compute a residue, you have to compute the Taylor expansion near z equals 0. Well, first, this one, so let me write it this way. 
So, 1 squared minus z squared, 2 pi over sine 2 pi z. And this one is just 1 over z minus minus z, so it's 2z to the power 2. So, it's 4z squared. I have maybe made a mistake. Sorry, it was just pi, not 2pi. Okay, so let me put that 4 in front. So, it's residue, so let me put that 1 over 4, residue when z goes to 0, dz. So, here you have a z squared coming from there. Okay, this one, let me write it as 1 over z1 squared plus z squared over z1 4 plus o of z4. So, that just this 1 over z1 squared minus z squared, Taylor expanded at z equals 0. This pi over sine 2 pi z is just 1 over 2z minus 1 over 6 2pi over 2pi squared z squared. Well, let me put the 1 over 2z in front, so 8z cubed. So, plus o of z4. Okay, so this is 1 over 8 residue when z goes to 0 dz over z cubed times 1 over z1 squared. Well, let me put the 1 over z1 squared in front. So, 1 plus z squared over z1 squared plus o of z4 times 1 plus, so 4pi squared over 6 z squared plus o of z4. Okay, and the residue picks the coefficient of 1 over z. It's quite easy to see, it's 1 over 8z1 squared times 1 over z1 squared plus 2pi squared over 3. So, this is the end of the computation. So, you see, very easy, you can put it on computer and it computes automatically every WGN you want. In a finite number of steps, it's quite easy, the first two are doable by hand in just like four lines. Okay, and indeed, it gives the correct result. So, you can check that this is equal to that. It's not very important, but I think you can reschedule everything by pi, and then you don't have pi in the form. Well, the Vylde-Petersson volume form is defined this way. Yeah, yeah, yeah, you could, of course. Yeah, yeah, of course. Of course, there is a lot of homogeneity properties in everything, but that's how things are usually normalized. So, this is what you find, and this is the correct result, and I encourage you to compute 004 by the same method, and you will find the correct result too, and anything you want, you will find the correct result by this method. And you see, it's very easy to use in practice. So, this Mirzharani's recursion somehow solves the problem. It allows to compute all the volumes by a very simple recursion. And if you, so, in fact, Mirzharani's recursion was written in the real length, and it's an integral, but written in Laplace transforms, it becomes a residue, and it's in fact easier to compute. So, in general, it's not the volume of something, it depends on the Laplace transforms of volumes. Yes, somehow. But it's, again, another kind of surfaces with some decoration. You have a motivation for this formula that you pulled out of a hat, for W02? Yes or no? If I would have said zero, if it's not defined, then the volume has, you know, it's a logical way to define it as zero. Good question. No, somehow, this is what works. And also, just to tell you, for the sine 2 pi z, initially, so, initially, well, so when Mirzharani's recursion was found in 2004, I had just found a recursion for computing the large N expansion in matrix, in random matrices. And the two recursions were a little bit similar, and Laurentin told me, look how similar they look. There must be a way to match them. And what function should we use? And initially, we didn't know we should choose the sine function, sine 2 pi z. And we made a lot of tries, and somehow we say, okay, let's take an arbitrary function like sum of tk, sum of tk zk, instead of sine 2 pi z. And we computed the first 2 tk such that they match, so we found 1 over 6 times 2 pi to the cube. 
The next one was 1 over 5 factorial times 2 pi to the 5 and so on. And we went up to order 15 before we decided, well, we should try the sine function. And then it was very easy to prove afterwards. It's just a plastic transform. But so it's not obvious at all you have to make some guesses. And it corresponds to the first, to what I said in the introduction, given an numerative geometry problem, you have to guess a complex curve that will give the, for which the computation of topological recursion will give the same answer. You have to make a guess, and it's not obvious at all what should be that guess. There is no general recipe. What's the complex? Well, somehow this is this function. Sine 2 pi z. And I'm going to say it in, so, and it's not algebraic, of course, in this example. So now let me go to the, let me see how to define a generalization of that formula. So what can we generalize in that formula such that it can be applied to other cases? So let me start again. So, no, is it okay? Not yet. I'm just going to, not yet. I'm just going to say, okay, this is Mirzarin's recursion. And you see that there are some ingredients that are very specific to computing hyperbolic volumes. And, and for instance, what else could we compute? So how can we generalize this formula so that it computes some other things than hyperbolic volumes? Not only hyperbolic volumes. As I said, initially, the way we, the way we found that with Oranta is because we had a very similar recursion relation in random matrices. But in random matrices, this sine function was something else. It was not the sine function. It was something else. There are other few things that were different. Also, this denominator was a little bit different. But the same structure, this bracket here was exactly the same. Well, not exactly here. In fact, this function was not minus z, but something else. And the resues were not taken at zero, but at other points. So somehow what we tried to do was to find a common way to write the formula, a formula that would contain the Mirzarin's case and the matrix model case and other cases too. For instance, one which was found for Havitz numbers. And is there a way to write a general formula that could match for several examples? And let me replace things. So the first step will be to, in fact, what we had realized that in fact, when you do changes of variables, the WGNs do not really transform as functions of z1, z2, zn. They transform as differential forms. So in fact, it's useful to define some differential forms. So define differential forms. So omega gn of z1, zn will just be Wgn of z1, zn times dz1, dz2, dzn. In fact, this is a tensor product. But very often I will forget to write the cross. It's a symmetric n form. So this is a symmetric n form on a surface which will be, so here the zi's are just complex numbers. So it will just be on, so, and that I will call sigma. So on sigma to the n. So it's a tensor product, meaning that it's a one form in the first variable, it's a linear combination of one forms in the first variables, whose coefficients are one forms in the second variables, and so on. So it's just a tensor product. It's not the exterior product. And it's symmetric. So first we turn the Wgn to differential forms. So let's take this equation and multiply by dz1, dzn on both sides. So here, for instance, we shall multiply here by dz2, dzn, and dz1, let me put it here, because dz1 appears only there. So on the left side, by multiplying by this, we have turned that into the omega gn. 
On the right side here, this is not yet exactly the omega g minus 1 and plus 1, because we have the dz on the minus dz that are missing. So let's multiply by dz on minus dz and divide by dz on minus dz. So now, if we, so the denominator 1 of dz on minus dz, so minus 1 over dz to the square, let me put it in front. And you see that this is indeed the omega. And same thing here, it turns everyone into omegas. It may seem strange to have a dz in the denominator. Well, first, observe that here we had a dz in the numerator that cancels one of them. So, but remember now that this contains a dz on the minus dz, so it's a quadratic form, divided by dz, this is a one form. So it makes sense, this is a one form, you can compute the residue of a one form. It makes it strange to have a dz in the denominator, but remember that you have two dz in the numerator. Okay, so this was the first step, turn everyone, so now of course omega 0, 2 becomes dz1, dz2. Now, observe the following property, which is integrate from minus z to z, integrate omega 0, 2 of z1. And the variable that you will integrate, let's give it a name, z prime equals minus z to z, z prime. So it's integrate from minus z to z of dz1, dz prime over z1 minus z prime to square. Okay, so the dz1, let's put it in front, it's spectator. Okay, you can put it here, and while this is just 1 over z1 minus z, minus 1 over z1 plus z. Okay, and this is just so dz1 over z1 square minus z square times 2z. So let me write it this way, dz1 to z, so it's this integral minus z to z of omega 0, 2 of z1. And let me put a dot for the variable which is integrated, I'm not going to write it. So which means, let me replace that quantity by this one. So here I will write from minus z to z of omega 0, 2 of z1 divided by 2z. In fact, this dz, let me put it here. Let me put minus in front. Okay, so it's just a rewriting for the moment, it's just a rewriting. Now let me introduce another quantity, let me introduce two functions. So, x of z, what will just be z square, and y of z, that will be minus sin 2 pi z over 4 pi. Okay, why not? Observe that dx equals 2z dz. Okay, first thing, it's the same quantity that appears here. And second thing is that it vanishes at z equals 0. Let me call this point A. Okay, so, and another property is that x of minus z equals x of z. So the function sigma of z equals minus z, so sigma is the function that maps z to minus z, it's an involution. It's an involution such that x of sigma of z equals x of z. Okay, so let me now replace everywhere where I had this minus z, let me replace it by sigma of z. So the actual, the spectral curve or on the c, means this. On the spectral curve and the name of the whole term. So the spectral curve from the moment I have not really fully defined what is a spectral curve, but basically the space where z lives, z. So this variable z, it lives for the moment in the complex plane. And the spectral curve somehow is the complex plane plus some extra structures which are those functions, those three functions are defined on the complex plane. And somehow the spectral curve will be the data of all that complex plane plus a function x and a function y and two on the form omega 0 2. So this will be what I will call a spectral curve. It will be the data of all those things later. So, and so when I take, so here I said that I will write that as dx of z. Okay, the reason why I take it at the point where dx vanishes, so which is a, so where such that dx of a equals 0. Okay, and so let me put the minus here. 
Let me observe that this denominator is nothing but 1 over 4 y of z. Okay, this 4 let me write it this way, 2 times 1 half. Sigma also vanishes at 0, s, a. In fact, sigma of a equals a. Sigma of, a is the fixed point of sigma. Sigma of a equals a, which is 0 in that case. So sigma is indeed the involution that permits the different branches that correspond to the same x. So what will happen is that the map, so you have the complex plane, sigma will be c, it's the complex plane. Okay, and you map it by the function x also to the complex plane. So, but somehow this is a curve and this is a base curve and x is a projection from one to the other. And the points where dx vanishes are the branch points and also sigma is a local involution that permits the different sheets. So yes, sigma permits the branches, the different branches of the covering. So that will be the new one. Here in that case there is only one. So for this function x, there is only one fixed point. But indeed we decided to introduce those generalizations because for matrix models, typically you have two fixed points. Or more, or more than two fixed points. But in matrix models usually there were more than one fixed point. So then we shall sum over all a such that dx of a equals 0. So you get the Euler characteristic? No, not this. You don't have this type of theorem. Yeah, something with fixed points. It's not complex. Okay, it's not complex. So just let me say that here I wrote 2 times y of z, but let me be more subtle and write it this way. Y of minus z, which is sigma of z. Okay. Let me do it this way. So in that case, y is an odd function. So it does not change anything. But there are many examples that we shall consider where y is not an odd function and making this difference is crucial. So let me now save the following. So now choose. So it will be the definition of my... For the moment what I've done is just... no, sorry, let me... actually there is one more step. Let me... there is one more step is defined. Omega 0 1 of z equals y of z dx of z. So it's a one form. So basically omega 0 1 is the form y dx. Let me define this. So let me put this y together with this dx. Observe that dx of sigma of z equals dx of z. Observe that you have that. And so now replace this. So put the dx together and here define omega 0 1 of z minus omega 0 1 of sigma of z. Okay. So you see that now in this way all the ingredients you need... So it's just a rewriting of Mirzani's recursion. But in a way that we could hope to apply to other choices of x, y and so on. Okay. So now let me... let us make a true definition corresponding to that. So what are the ingredients we need? We need to have omega 0 2. We need to have a function x that realizes a covering of, let's say, a surface, a remand surface by another remand surface. You need to have a covering for which you have that as branch points and for which locally there is an involution that permutes the branches that cross at the branch points. Okay. So we need that. We need a one form, omega 0 1. We need an omega 0 2 and you see it was quite important that it had a double pole. So it needs to have a double pole. It needs also to be symmetric. And apart from that, that's more or less all what we need. Another remark is that since we are going to compute residues, all what we need to be able to do is to compute Taylor expansions near the branch points. Basically everything which is far away from the branch points does not matter at all. 
So in fact, and also residues will pick a finite number of terms in the Taylor expansion. In fact, if you have a true convergent, so if you have a true analytic function whose radius of convergence is larger than zero, or if you just have a formal series, does not make any difference in computing the residue. The residue only picks a finite number of terms. So if your radius of convergence is zero, that does not matter. You can still compute this. So in fact, what we shall generalize, y will not need to be a function of z, a mermorphic function of z. It will just need to be a formal series of z, a germ of analytic function. You don't really need x and y. No, in fact, you need x and w01. And indeed, you don't really need x. You just need the involution. You just need to know that there are branch points and involutions. In fact, that's all what you need. But for my, I prefer to introduce really a function x. There has been a lot of debates about that. Do you really need a function x or just locally? In fact, what you really locally need is a kind of polarization procedure. But okay. So somehow a function x is a little bit too much. Sorry. Because when I will consider the deformation theory of all that, I like to consider the deformation of the moduli of a function x. But okay, I mean, this is not fully established. I think maybe there are still probably improvements that can be made. But so let me define what I will call a spectral curve. So 2, 1. So this will be my part 2. Two definitions. And so 2, 1 will be spectral curves. So my definition will be that a spectral curve, s equals spectral curve, will be the data of a remand surface. So omega 0, 1 and omega 0, 2. So a spectral curve will be the data of four things. So sigma is a remand surface. What's in fact people like to call local remand surface. Means that it does not need to be not necessarily connected, but it can be compact or connected. Typically all what you want is that it contains some vicinity of a branch point. So all what is needed is that it contains some vicinity of a branch point. So it can be just a union of small disks. It can just be a union of small disks that contains a branch point. And whether there is a general, whether all those disks can be put together into a curve, does not matter at all. Sometimes they can, sometimes they can't. And if they can, that means basically that the mirror in your enumerative geometry problem is really a curve. If they can't, it means it's not a curve, but typically higher dimensional space. Well, okay, let me not insist on that, but what you need to run the definition of topology curve, to run the recursion, all what you need is that it contains some small vicinity of a branch point. That's all what is needed. So second thing is that x is a map from sigma to let's say Cp1. And such that dx has a finite number of simple zero. In fact, you can generalize this notion of finite numbers. If you have a way to take sums, so for instance if you have some gradings or for instance introducing a Q parameter, so if you have a way to define sums of infinite numbers, then it's possible to get rid of that assumption. And the fact that they have simple zeros, it's only for the moment, I will later give the definition when the zeros are not simple. So let me for the moment say that this is a regular, regular spectral curve. So a regular spectral curve means that the zeros are simple. So if A is a zero of dx, there exists local, sorry, and it must be holomorphic. Or let's say meromorphic. 
There exists a local involution, sigma A in a vicinity in a neighborhood of A such that x of sigma A of z equals x of z. So in fact what you need is involution rather than the function x. And it can be defined only using the differential form dx, in fact. So z belongs to sigma in a neighborhood of A. And such that sigma A of A equals A. And sigma A is of course different from the identity. You choose the other, of course. So if x is holomorphic, local even sigma is holomorphic. If dx has simple zeros, it's unique. And omega zero one is a meromorphic one form. In the neighborhoods of A's, of branch points. So typically locally, so locally a good local variable near A is zeta A of z, which is just x of z minus x of A. This is a local variable. And the involution is just zeta goes to minus zeta, changing the sign of the square root. But this is defined only locally, it cannot be defined globally in general. So typically omega zero one will be sum of t, ak, zeta to the 2k plus one, so 2 zeta d zeta. So times dx and dx, 2 zeta d zeta. So from k equals, in principle, from zero to infinity. But let me in fact choose the coefficient from one to infinity. And let me call the first coefficient one over t A zero. This is only the odd part plus even part. But since I'm going to take the difference, you see, I'm going to take a difference omega zero one of z minus omega zero one of sigma of z. So it means that only the odd part matters. And in fact, it's customary to normalize things slightly differently and put a 2 to the power of k over 2k plus one, double factorial. To define those coefficients t ak's. So that defines the coefficient t ak's. And t A zero is here. They are the coefficients of. k zero is kind of inwards to two zero. Mutation is not inwards. Yeah, I agree. But it's because if you do that, you will get only polynomials in the t's. Nothing in the denominator. If you put the t A zero in the numerator, you will have, it will appear in the denominator in the amp. So these are just the Taylor series coefficients. And see, it's a formal series. It does not need to have a radius of convergence. So why Cp1? Why not high dimensional? Could be another or it could be another Riemann surface sigma zero. Doesn't matter at the moment. But since everything locally, so since in fact you just look at neighborhoods, a neighborhood of a Riemann surface is always a neighborhood of Cp1. Or C. Yeah, could be just a disk in fact. On omega zero two is meromorphic one tensor one form on sigma cross sigma. This one I like it to be defined in a full neighborhood. So not only a formal series. Again, I'm not sure it's absolutely necessary. But let me assume that it's really now not just a formal series, but we have a double pole on the diagonal. So which means that omega zero two should be F's and it must be symmetric. Omega zero two in any local coordinates should behave like the Z1. So for instance, you could use the coordinates. So here this notation means plus analytic. At one Z1 goes to Z2. So validing coefficient is one. In fact, you can generalize that also. So this is the simplest case, but this can be generalized. Let me write it here. In fact, when you have several branch points, so imagine you have so sigma is a kind of cover. And here you have your sigma zero and this is the cover by X. You have one branch point here. Let's call it A1. Another branch point here. Let's call it A2. So the local involution sigma A1 is the involution that exchanges those two points. 
You see that over a given point here, X, you have several prime edges. You have, let's say, here three branches. And the local involution exchanges these two, but does not touch that one. It's not defined near the other one. So the local involution would exchange these two branches. So if you have two neighborhoods, so if you take two neighborhoods, what you need really is that omega zero two. So if you take omega zero two of Z1 Z2, if you want to study it in neighborhoods, so when Z1 is close to a branch point A and Z2 is close to a branch point B, it can be the same or it can be different. Okay. Well, in the local variables, square root of X minus X of A on square root of X minus X of B, basically you would like it to be like delta AB d zeta A of Z1 d zeta B of Z2 of zeta A of Z1 minus zeta B of Z2 to a square plus, and again, we shall compute the Taylor expansion plus some over K and L. Let's call the coefficient BAUK BL this way, d zeta A of Z1 to the power 2K, d zeta B of Z2 to the power 2L, d zeta A of Z1, d zeta B of Z2. Okay. Sorry. The power K zeta B of Z2 to the power 2L. So these are just the Taylor, leave some space here because I like to put a 2 to the K plus L plus 1 over 2K plus 1 double factorial, 2L plus 1 double factorial. It's just a normalization. So yes, plus, plus even parts. You see this is odd because there is 2K times 2K plus 1, so this is somehow, this is odd, or in fact, so in vocation, you can add odd parts. But the coefficients, the Taylor expansion coefficient of odd parts will play no role. So are you saying that leading part after evolution, the hyperparameterized parts remain the same? So the leading terms, yes. So after you introduce this in evolution, then it hyperparameterizes, and then the leading terms remain the same. Yes. Because everything will be invariant by this evolution, so only the part of the Taylor expansion which is invariant by the evolution matters. The non-invariant parts will be cancelled out. Just a remark here, you could replace this delta AB, can be replaced by, let me write it, one half of KAB. In fact, it's interesting to take a Carter matrix here. This is a nice generalization. You can put a Carter matrix instead of delta AB. But in this case, you have to identify local coordinates differently. Yes. So it's useful for heating systems. So in this case, you have two different local coordinates, and they are identified by this matrix. So I will stay with delta AB, but you could generalize by putting a Carter matrix. So now that we have defined a spectral curve, then we shall define the topological recursion. And the formula is written here, in fact. So it's my part 2, 2. Definition of TR. And so the definition defines omega GN by this formula. If I give you a sequence of T's and please, any sequence, would you call it a spectral curve? Yes, basically that's what I did here. Any sequence of T's defines an omega 01. Well, regarding omega 02, I like it to be really, to have a finite radius of convergence. It's not necessary, but I like to have this property. Sorry. K and L are positives. So we don't have any restrictions on this TK. You are uniquely determined by this omega. Yes. Yeah. The data of omega 01 is exactly the same as the data of the TK's. Exactly the same information. What did you say? It's also arbitrary numbers B. So those B, A, K, and B, L are more or less arbitrary, but I like to have that this series is convergent in a disk. 
These two really be an analytic function in a disk with finite radius of convergence, which puts some restrictions on the B's that you can choose, but they are not very strong restrictions. And you include the omega 02 in the definition of the spectrum. Yes. Yes, omega 02 is included in the definition of a spectral curve. So the spectral curve is the data of the four things, sigma, X. No, no. Why did you include it? It's in the definition of the spectrum of the deformation theory properties of the future. We'll see that in the next lecture. But in fact, there are some deformations of omega 02 and omega 01 that can be totally independent. Yes, yes, yes, I agree. Okay, I agree, but for the moment, this is my definition. This is my definition. So now this is well defined. So you define omega GN by a recursion on N on 2G minus 2 plus N. It's a recursion on that number because you see that to express the left-hand side, you need to have already computed some omega G prime N prime and you can compute a smaller value of this number. So it's a recursion on, it's a recursion. And in fact, it means that you can compute omega GN in exactly 2G minus 2 plus N steps. Yes, yes. So in the first step, you determine omega 03 and omega 11. In the second step, you can compute omega 04 and so on. So let me state some property. Oh, sorry, the definition is not yet. So that's for N larger than 1. And the definition, let me also define omega G0. Well, omega G0 contains, so basically N is always the number of variables here. There's nothing. Omega G0 is what I will call FG of my spectral curve is defined as 1 over 2G minus 2 times some of our old branch points. So I will raise you at the branch point of omega G1 of Z times a function that I will call F01 of Z where the differential of DF01 is omega 01. And that's defined for G larger than 2. So it's a definition. There is a definition for F1 and F0. But I'm not going to write. I'm not going to write them. And in fact, it's not only civil literature, there are some subtleties. In fact, F1 can be defined. You see that this formula does not make sense for F1 because there is 1 over 2G minus 2. But there is a way to define F1. It's just basically it involves not only residue, but there are logs and things like that. I don't want to enter the details. But for F0, there is a fundamental difficulty to define F0. And I will talk about that in the next lectures. There are some important subtleties about F0. So some properties. So let me state a few theorems about the properties. So first of all, there are many, many examples where you have a spectral curve. And you run this and it can compute some things that are useful for something in random matrices. In random matrices, if you take a spectral curve, you take the large n limit of the spectrum of a random matrix. Then basically you compute all the large size expansion, large n expansion of correlation functions. So miraculously with this procedure. So let me just say a few properties. The large n limit, large correlation function of what? I'm not going to enter the details. I don't want to. We can discuss that later. But it just to say that this formula does indeed compute interesting things in many cases. In Mirzani's case, you see it computes the hyperbolic volumes. If you start with another, if you start with a curve, okay, I'm going to give examples. But so just let me state some properties. So technically, omega g n is a symmetric. So that's a theorem. Omega g n is a symmetric. 
That's not trivial from the definition n form on sigma to v n. It's not obvious from the definition because z1 seems to play a role totally different from z2 up to zn. It seems to play a totally different role, but it's always symmetric. It can be proved by recursion. I'm going to write it. So in fact, the true, so it's meromorphic with poles only at ramification points. And of order, the order of the poles is at most 2 times 3g minus 3 plus n plus 2. Sorry? No. No. Except omega, sorry, for 2g minus 2 plus n positive. They have no poles on the diagonal. Only omega 0, 2 has a pole on the diagonal. Omega 0, 1 can have pole anywhere. Okay? But all the stable ones have poles only at ramification points. And so technically, I will write that omega g n belongs to r0 of sigma n to k sigma of omega n. Let me call it this way, as sim. So this notation means that this is the canonical bundle of sigma raised to the tensor product of n copies. And each copy, so somehow each copy corresponding to one of the factors of sigma to vn, that's what this square box means, r is the set of ramification points. So which is the set of A such that dx of A equals 0. And star means that there can be any degree. So another part of the theorem, so the really true important statement in that theorem, well, no, there are several important statements, but one thing that is not trivial is that it is symmetric. Which is not so trivial, but quite easy to see on the definition is that the poles can be only at the ramification points. And it's because we take residues at ramification points. That's the only places where you can generate poles. In fact, no, I should have said branch points because the poles can be on any pre-image of ramification points in case where you have this carton matrix. Well, so the fact that the poles are at ramification points, there is also one important property is that the residues of omega gn are 0 at any ramification point. If you take the residue in any of the variables, the residue is 0. So they are poles without residues. And this is why this definition here is well defined because you see f01 is one integral of omega 01. So it could be defined up to an additive constant. But because the residue of omega g1 is 0, the additive constant plays no role. So this is well defined because of that property. So it does not depend on the choice of primitive you take for f01. So let me state another theorem which is nice, which is that now if you take omega gn of z1 zn equals 1 over 2g minus 2 plus n sum over a. Of residue at z goes to a. Omega gn plus 1 of z1 znz and f01 of z. So for every, in fact, for n equals 0, this was the definition. But for n larger than 1, this is a theorem. In fact, this is the theorem which motivated the definition. It is often called, for the way it appears in string, when you look at applications to string theories, it's often called the D-Laton equation. In the spirit of modular spaces of surfaces, it means that if you have some surfaces of gn of z with n plus 1 boundaries, and you glue a disk to one of the boundaries, you get surfaces with n boundaries. Somehow it's the way to close a boundary, to glue a disk on a boundary. So another property that is useful is I define the rescaling of spectral curves. It's a homogeneity property. Definition. If you take lambda belongs to c star. You shall define lambda times a spectral curve. 
So if you take s, a spectral curve, sigma x omega 0 1 and omega 0 2, you shall define lambda times your spectral curve as just rescaling omega 0 1. Lambda omega 0 1, omega 0 2. So it's just rescaling omega 0 1. Then the theorem is that omega gn computed for a spectral curve is lambda 2 minus 2g minus n, omega gn. Basically the omega gn is our homogenous of degree 2 minus 2g minus n. This is obvious from the definition because omega 0 1 appears only there. So let me just finish by showing a few examples of spectral curves. Let me just give you a few small examples of spectral curves. So an interesting example is the following. So we have 4z4 minus 4z square plus 2. Notice that these are the Chebyshev polynomials of degree 3 and 4. So that omega 0 1 is y dx. So this one is especially useful. You see x is of degree 3, is a degree 3 covering. So there are 3 branches and there are in fact 2 branch points. So if you write dx, it's 3 times z square minus 1 dz. Yeah, okay, c. They satisfy the equation y cube minus 3y minus x4 plus 4x square minus 2 equals 0. So the two satisfy a polynomial equation p of xy. Okay, the two satisfy a polynomial equation. So there are 2 branch points and if you want to compute the sigma a of z, it can be written explicitly minus z plus a times square root of 12 minus 3z square. So you see you have 2 involutions. So basically you have sigma plus and sigma minus that correspond to choosing different branches of a square root. And this one is very useful to compute things about the Ising model. It's related to the Ising model. But I will not say how. So if you compute all the omega g n's of that, it's very closely related to the Ising model. I'm not going to say how, but it's a very useful, very interesting case. Another interesting case is consider the equation y square equals 4x cube minus g2x minus g3. So a typical elliptic curve. So it can be parameterized as follows. So s will be the torus of some modulus tau which is related to g2 and g3. And the function x of z will be the wire truss function p of z. And the function y of z will be p prime of z. And they satisfy this equation. Omega 0 1 is as usual y dx. And for omega 0 2 of z1, z2, you want something that has a double pole on the diagonal. So let's take the wire truss function of z1 minus z2. It has a double pole. You can add any constant to it. d z1 d z2. This curve is very useful and is related to cyber-witten. I think SU2. You're calling it a cyber-witten curve. Excuse me. Is it a cyber-witten curve? Yes, more or less. Yes, it's more or less the cyber-witten curve. So there are plenty of other examples. Another example, which I like, is the case where s is c minus r minus x of z is minus z plus log z. y of z equals z. And omega 0 2 is the one I usually choose for the complex plane. d z1 d z2 over z1 minus z2 to the square. OK. You can check that e to the x equals y e to the minus y, which means that y is the Lambert function of e to the x. It's often called the Lambert curve. And this is the definition of a Lambert function. This is the very definition of a Lambert function. And if you look at, if you plot x, y, it will look like that. OK. There is one branch point. Your second example does compute ground-fitting variance for result point for? In fact, is the topological string partition function? In fact, if you really want to see that it computes, yes, it's related to some, you know, in fact, to re-get the topological string partition function, you need to take x equals log of that and y equals log of that. 
So basically, it's when you go to the exponential variables that you re-compute the topological strings. But this one does compute a matrix model, a certain matrix model. Is the class partition function for this one, is this zebra partition function? It's related to an across-off partition function. I'm not sure which one this one is. Oh, this one? No. This, yes, it's related to an across-off partition function. It's very closely related. Well, no, not exactly that one, but something that looks like that. Indeed. Well, so the idea is that for every topological string on the Toric-Kalabi house, for instance, there is a spectral curve, and it's basically the mirror. And that's what I wanted to point out. So for instance, imagine that you take the equation, so e to the minus x, sorry, e to the x plus e to the y plus e to the minus x minus y plus q equals 0. Take this equation and you see that it's the curve, that is the mirror of the result conifold. In fact, the sigma is a torus. Sigma is a torus, and this defines two functions, x and y, on the torus. This defines one form, y dx. But basically, the fact is that e to the x and e to the y are mirror-morphic functions, so x is the log of a mirror-morphic function, and y is the log of a mirror-morphic function. So which means that the form y dx has logarithmic singularities. But that doesn't matter because the logarithmic singularities are not at the branch point, so you can still compute everything, and then this computes the omega-g n's and r of the grom of written invariance for the result conifold. This has been proved. Any curve then what you compute is the B-model function of colibb, uv plus the curve. Yes, somehow, yeah, or some generalization of that. But that's the idea, yes. The idea is that you are always computing the B-model side of grom of written invariance. So let's stop here for today. Excuse me. What number curve? Oh, sorry. Lambert curve computes the habits numbers. I can even write the full definition for Lambert curve. So yes, indeed that's an interesting example for the Lambert curve. I can write what the Lambert curve omega-g n's are. So for Lambert curve, so omega-g n of z1, zn is some of, let's call that hgn of mu. Some of our mu is such that of length n over 2g minus 2 plus n plus mu factorial, mu of e to the x1, where xi equals x of zi. So with this function x minus zi plus log of zi. Okay, and mu are partitions and so on of length at most n. So which means that some of the mu is can be 0. Okay, and hg mu hgn of mu1 mun is the number of ways of factorizing a permutation sigma whose class of class. So c mu is the set of conjugacy classes of permutations. Sorry. The conjugation class of a permutation is just the length of these cycles. So a permutation sigma with cycles of length mu1 to mun. And you want to factorize it as a product of 2g minus 2 plus n plus mu from positions. So if you take a given permutation with cycles of length mu1 up to mun, in how many ways can you factorize it into a product of transpositions with that number of transpositions? It's a certain number, this hgn of mu is a certain integer number, and this is called the Harvitz number. They should maybe should write in product percent. Product of transpositions. So it's in principle it's time to stop, but let me just give you an example. h01 of mu1 of mu. So it's the number of ways of factorizing a permutation with a simple cycle. So take the permutation 1, 2, n. So that's the cycle 1, 2, 3 and so on. n. So this is the permutation. 
In how many ways, so sigma equals that, in how many ways can you write it as the product of mu minus 1 equals product of tau1, tau2, tau... Sorry, it's not n, it's called... let me call that number k and we're... sorry, 1. This is the number mu. tau mu minus 1. It looks like relation from the metal group of punctured sphere, not the k and 2's curve. It looks like relation from the metal group of punctured sphere, yeah, in the product of... But for genes bigger than 1, it should add product of ai, b, commutator of ai, bi, the time, see, that's not... Yeah, we're going to consider representation of... essentially consider homomorphism of the metal group of punctured surface to symmetric group. Yeah, but for fundamental function surface, it's not free group with something from the piece of arm. When genus is bigger than 0, it's more complicated. Okay, yeah, yeah, I think it's a small mistake, yeah. Okay, but in genus 0, you agree that it's bad? Yeah, but in genus 0, you agree that it's bad? So for instance, when mu equals 2, for mu equals 2, it's just the transposition 1, 2 and there is a unique way to decompose it as a product of one transposition. No, no, no, I just said that for genus bigger than 1, it should write in different ways, not a product. It's because you need relations from the metal group of a punctured surface product. Well, here it's defined as something of a symmetric group. I'm quite sure that this is the correct definition. I have a question on this notation. So you said that omega gn is the set H0, means homological H0. So basically it's a section of a homogeneous line bundle or a water bundle over sigma n. So this theorem you prove that this is of degree some 2g minus 2 plus n. It should be followed from the theorem there that omega gn belongs to the sections of this. No, no, no, no, no. Why not? Because this is also a homogeneous section. No, no, no, it's a procedure. It depends on omega 0, 1 and the homogeneity of the dependence. So here we also have a homogeneous positive degree with this omega t. I think we are not talking about the same thing. No, we are not talking about the same thing, not the same homogeneity. So the homogeneity I was mentioning before was the homogeneity with respect to the one form omega 0, 1. And I think it's not the same you were talking about. But so if, OK, provided we have a good definition of Harvitz numbers, it is proved that the omega gn defined by the topological recursion for that spectral curve do indeed compute the general functions of Harvitz numbers. So is the Harvitz number of the spectral curve? No, no, no, it's the number of coverings with different notifications. Of the spectral. Of Cp. So in fact, this is computing the, so you have a certain number k, which is the weight of a partition new sheets to cover. So you have coverings and you have a special point, let's say, at infinity, you have a point with certain ramification profile. And here you have, let's say, three branches coming together and two branches coming together. So that would be given by that partition at infinity. OK. So it's simple Harvitz numbers. And the way, since you want to factorize it as a product of transposition, it means in how many ways can you put over branch points, which are simple branch points, you want everything to be connected. Sorry, I forgot to say connected. I forgot to say it's called as a transitive product. Transitive means connected. Transitive action, there is a group action, there is a transitive action. 
It's the symmetric group action. So in how many ways such that the genius of that surface would be G? So it means it's the number of homotopy classes of such decompositions. Sorry, I mixed up these coverings of high-grain curves. It's all coverings of CP1. Yes, so I think there is no problem here. So in how many ways can you cover CP1 with new sheets in such a way that you have something of G and G unconnected? So this is, so the HGN of mu is just an integer number. And you want, so this omega Gn defined here. So here it's the monomial symmetric function, the monomial symmetric polynomial. This defines the series of Z1, Z2, Zn. And the theorem is that it's the same thing that you compute by the topological recursion applied to that curve. This is a theorem. This was first a conjecture by Bouchard and Marignot. And we proved it with Moulassé and Safnouk in 2008, I think. But then we realized that it's just a sub case of something much more general. Basically, it works for all grommets with an invariance of Tauric, Calabi, our three-folds and orbit-folds. And so the general theorem is that if you take a curve, if you take a spectral curve that is the mirror of some Tauric, Calabi, our three-fold, then the topological recursion applied to that curve computes the grommet-frittany invariance of the corresponding Tauric, Calabi, our three-fold. And that's a theorem. It's been also established for orbit-folds. But it's not known if you can go beyond Tauric. Tauric orbit-folds. But it's not known if this is still true, if it's not Tauric, for instance, for Wukwintik. It's believed that it's true, that it continues to hug, but there is no proof of that. And also another interesting thing that I will mention is that if you take as a curve the apollinomial of a knot, the conjecture is that somehow you are computing the coefficients in the expansion of John's polynomials, or the Humphrey polynomials. So this is an extension of the volume conjecture. But this is a conjecture. Even the leading order is not proved. So it's supposed to be hard, but we checked it to a few orders for simple knots, like figure of eight knots, and it worked perfectly. It's not a Tauric case. It's not Tauric knots. What is B-Model Calabrian? I don't know if... Okay, I know there are some works on that. I'm not a specialist, but I'm not sure what it is. B-Model Calabrian is not fiber-tovered curve. It should not expect curve computation. It should expect computational spirit of B-C-O-V, where we integrate with the e-dimensional... But still it works. For this Calabrian... I don't know for which Calabria, but if you take a spectral curve A-Polynomial, it works. But indeed the A-Polynomial of a knot is not the mirror curve of a Tauric Calabria, as far as I know. But still it works. But it's a conjecture. Much more general than the volume conjecture. So that's the end for today. So what I want to show next time is that... So we have a recursive definition, and the good way to write recursions is to write them graphically. This is basically the picture. In fact, many of the theorems I mentioned, for instance, the fact that you get something which is symmetric, can be proved graphically. And there is a nice combinatorial way to represent this recursion, just using combinatorics, with which you find very quickly... So there is a new formalism introduced for this topological recursion by Maxim and Jan Soebelmann. And an easy way to see it is using the graphical representation. 
This graphical representation is very useful to compute things. And it's also what allows to find a modular space, and a co-homology class on that modular space, such that the omega-gns computed by the topological recursion are indeed integrals of co-homology classes in this MGM. And the formula is amazingly simple. I will give you a proof of Nier-Zarani's recursion. I call it a four-line proof of Nier-Zarani's recursion, but four lines are really because I expand all the details of the computation. Basically, it consists in proving that the Laplace transform of the sine function is basically a very simple function. And also, for the case of Lambert curve, the computation is quite simple. There is another formula saying that this is also called the ELSV formula. This is also an integral of our MGM bar of the Hodge class. Let me call it this way. Times product from i equals 1 to N of 1 minus mu psi i, mu i psi i, times some factors that are e to the mu i x i dx i, and probably something like mu i to the mu i divided by mu i factorial, something like that. So, if someone remembers the ELSV formula by heart, it's something of that sort and some of our mu. Basically, I will show what this corresponds to for general spectral curves. The idea is that instead of the Hodge class, we'll have another class that depends on the spectral curve, and I will give you an explicit formula for that class that generalizes the Hodge class. And for the Lambert curve, I will show you by a very easy computation that indeed we recover the Hodge class. And for Mierzahranic case, instead of Hodge class, you just need to take exponential kappa 1. And for Toric Calabio, 3-folds, it's a combination of product of 3 Hodge classes. I can stop here.
Topological recursion (TR) is a remarkable universal recursive structure that has been found in many enumerative geometry problems, from combinatorics of maps (discrete surfaces), to random matrices, Gromov-Witten invariants, knot polynomials, conformal blocks, integrable systems... An example of topological recursion is the famous Mirzakhani recursion that determines recursively the hyperbolic volumes of moduli spaces. It is a recursion on the Euler characteristic, whence the name "topological" recursion. A recursion needs an initial data: a "spectral curve" (which we shall define), and the recursion defines the sequence of "TR-invariants" of that spectral curve. In this series of lectures, we shall: - define the topological recursion, spectral curves and their TR-invariants, and illustrated with examples. - state and prove many important properties, in particular how TR-invariants get deformed under deformations of the spectral curve, and how they are related to intersection numbers of moduli spaces of Riemann surfaces, for example the link to Givental formalism. - introduce the new algebraic approach by Kontsevich-Soibelman, in terms of quantum Airy structures. - present the relationship of these invariants to integrable systems, tau functions, quantum curves. - if time permits, we shall present the conjectured relationship to Jones and Homfly polynomials of knots, as an extension of the volume conjecture.
10.5446/54707 (DOI)
10th anniversary of the revolution. Okay, so this will be chapter three of my lectures. So three will be what I will call the graphical representation of topological recursion. So let me recall that I define omega gn of z1, zn as some of our all-round ramification points residue at ramification points of a kernel that I will call kA of z1, z times. And there was this omega g minus 1n plus 1z sigma a of zz2 zn plus a sum g1 plus g2 equal g. And i1 i2 equals z2 zn. And this will be, that's what I called the sum prime but which also means no disk. Of omega g1 1 plus i1 z i1 omega g2 1 plus i2 sigma a of z i2. Right, so on kA was this quantity kA of z1z was one half of integral from sigma a of z to z omega 02 of z1. And the dot means the variable that is integrated and omega 01 of z minus omega 01 of sigma a of z. Okay, that was the definition. And let me also define something which is just a short name for omega 02. Let me just define it as a short name. Okay, the idea is that we have this recursive definition and we would like somehow to encode it in a way which is convenient for doing computations. So you see at each step we'll have to take product of k times some other omegas and at the first two steps we'll start with some omega 02. So we'll have some k times b and something like that. So let me just give an example, omega 03 of z1 z2 z3. You see it's sum over a residue at a of kA of z1 z. And here we have in that sum, well first the first time is absent because we start at gene 0, we are at gene 0. And in that sum, there are exactly two, this product contains two terms. So we have bz z2 b sigma a of z z3 plus b of z z3 b sigma a of z z2. And here are the two possibilities of decomposing the set z2 z3 into these joint subsets that are not empty. So we have exactly two possibilities, so this is what we have. So we shall use a graphical convention to write this quantity. And for that I will represent this as just a line with two, which forks in a trivalent vertex. And here I will put the variable z and here sigma a of z. And this one I will represent it as just a line z1 z2. This one has an arrow, this one has no arrow. So just a convenient way to represent this big formula is just saying this. So somehow we have z1 z2 z3, well so this should be linked, plus z1 z3 z2. So this is just a graphical, convenient notation. What this picture means is exactly the formula above. So it means each time you have an edge with an arrow, you replace it by the corresponding k. Each time you have an edge with no arrow, you replace it by a b. Excuse me? No, it's not totally, well, okay, there is a subtlety, which is that kA is symmetric. It is symmetric, but indeed when you want to compute carefully the symmetry factors, you have somehow to say that one edge, so somehow there is an orientation here. So let's put a dot on the left side, or no dot on the right side. So indeed you should be careful about that. But you have to keep in mind that in fact k is symmetric. So in fact all this is also equal to two times this one, z1 z2 z3. So either you put the dot, or you don't put the dot, but you have a symmetry factor two. Okay, that's a choice. And in fact because of this symmetry, the symmetry factors will always be powers of two basically. Okay, now let's look at another one with this graphical representation. Omega11 of z1 is sum over A residue at A, kA of z1 z. And here in the big bracket in the right-hand side, in fact the only term is the first one, which is a b of z sigmaA of z. 
So which graphically you represent, there is a k, there is that, and here you have a b. Again, there is that dot. Okay, so this is just a graphical representation of that formula. Okay, now let's consider another one, for instance, omega12 of z1 z2. Well, according to the formula, it contains sum over A, residue at A, of kA of z1 z. And here you have, in the right-hand side, what do you have? You have an omega03 of z sigmaA of z z2 plus an omega02, so b of z z2 omega11 of sigmaA of z, plus b of sigmaA of z, so let me write it the other way, omega11 of z, b of sigmaA of z z2. So here this was an omega03, so let me represent it this way. Okay, this is a kind of sphere, this is a sphere, it has g0, and it has three variables. Okay, this one is what I would represent as a torus with only one leg. Okay, it's just a graphical notation. It just means that omega11, I associated to it a picture of something of g1 with one leg. Whatever that means, just a graphical notation. For the moment, it does not mean anything. In the end, what I would like to prove is that indeed, omega gn is something related to mgn. So that's what we will find in the end. But for the moment, this is just a simple graphical notation. So let me continue with some examples. So omega12, let me continue to represent omega12, so omega12, so which is a torus with two legs, z1, z2, is according to this formula, this k, z1, and with two sides. And here I glue an omega03. An omega03 is that sphere. Okay, so here I was z, z plus, so there is the second term, which is that one. So here I glue a b and that plus on the third term is z1. And here you glue omega11 and the b. Okay, so but now the idea is to use again that this was already, we have already completed this omega03. It's the formula above, and it's already a sum of graphs. So the idea is that we replace this by a sum of graphs, by the corresponding sum of graphs. So this is, so we have z1. And here we start on for, oh yeah, here it's z2. It's also z2 here in the end. Okay, so here omega03 is that graph, so for instance let's, okay, let's write it this way. So here we add z on sigma i of z. And here there will be another variable, let's call it z prime on sigma b of z prime. Okay, so here we will have a vertex a and here vertex b. So what this is, this is sum over a and b. Residue at z goes to a, so this is the same thing as here. And when we compute omega03 we use another variable, z prime goes to b. And we have, so in this picture we have k of z1, z. We have a kb of z, z prime. And we have a b of sigma i of z, sigma b of z prime, and a b of sigma b of, sorry, of z prime, z2. So this big formula is, so basically this graph represents that formula. It's the same thing. So this graph is just a notation for that formula, but we have many other terms. We have, yes, you could indeed. Indeed you could. So here we could do, okay, let me do like that. Okay. So somehow this is z prime, or if you want, it's like I put the dot, no, no, no, let me do that. z prime, so this was z, sigma i of z, z1, z2, sigma b of z prime, so okay, a b. Plus, so this was for that term because omega 0, 3 contain two graphs. And now we can do the same thing here. So we have this, and here we have this. We have, so again we replace omega 0, 1 by its expression. And so we have this, plus. So in formula what does this mean? We could put a bracket here, plus b of sigma a of z, z prime, b of sigma b of z prime, z2. Plus, so this was the second graph. 
So now we have z sigma a of z, z prime sigma b of z prime, z2, z1. So we have, so this one will be a b of sigma a of z prime, and a b of z prime, sigma b of z prime. Plus the last term is, so if you are careful, b of z prime, b of z prime, sigma b of z prime. And sorry, times, I forgot the b. I forgot that b. There should be only two b's each time. So sorry, I forgot the one with z2. So there must be a mistake here in that graph. It was z2 here. Sorry, this was that graph. So there is a b of z, z2, which is missing. And the b, yes, sorry, this was a b of z, z2. And here we have a b of z, z, sorry. Completely wrong. We have this one, and we have a b of sigma a of z, z2, which is that one. Okay. Okay, so this is just, yes. Well, if you are careful, you see that this graph has the same value of this graph because of the symmetry. So in fact, you could write this with a factor 2. So in the end, you could also just say that this is two times this graph, plus two times this graph. Okay. You could just say that. It's equivalent. Sorry, no, they are not planar graphs. Well, you see here, well, okay, it's not subtle. They are not really planar graphs. Well, in that example, indeed they are planar, but it's not always the case. And oriented edges, for what here is it for? Yes, oriented edges, but I'm going to write it now. Oriented edges form a spanning tree of a graph, always. It's just because each time you apply the recursion, you always start by an edge arrow. So in fact, so just let me mention that the way of writing the recursion. So a way of writing the recursion is saying that something of genus G. So genus G and with n legs. So let me put the first one on the left. Okay. Equals. So according to my recursion is K times and here you put either you put something of genus G minus one. And with the same legs. Plus. And here, Z one. And here you put in all possible ways. So something with genus G one and here something with genus G two and some subsets. I don't know some subsets which I called I one and some subsets which I call I two. So this is the way of writing the recursion. Okay. So let me now state the serum. So if you recursively apply this exactly in the way I did it for the example of omega one two. If you apply this recursively, what do you have? What do you have? So the theorem. Which is kind of obvious. Which is that omega gen of Z one. Z one gen is the sum of our graphs G. Which belongs to a set that I will call GGN of Z one. The N. That will be a set of graphs. And we have a product. So. So I will describe this set but basically this set of graphs. So G has 2G minus 2 plus N vertices. Trivalent vertices. It has 3G minus 3 plus N edges. So some of them are oriented. 2G minus 2 plus N oriented edges. Forming a spanning tree. Of the graph. It's a tree with root at Z one. So it's always. So basically you will always have something like a tree. You have a tree. The root is always Z one. And you're going to have some. Okay let me take another. Okay and you're going to have also some B's. Some non-oriented ones. So let me complete. So you want the graph to be. Okay. Let me do that. That. Okay. You want the graph to be trivalent. So every vertex is trivalent. And it has. So N minus one. External legs. Are. Non-oriented. And and. At Z two. Z then. And you have G. Non-oriented. Edges. That form. G loops. So in the end you have a graph with G loops. Okay and with the but there is a non-local constraint. And the constraint is that. Those. Those internal. Edges and non-oriented edges. 
Can only go from a vertex to one of its. Descendants or ancestors. You are not allowed to go from one branch to another. So these are not Feynman graphs. There are less such graphs than Feynman graphs. Because of that non-local constraint. Going. From a vertex. To. Its. Descendant. Okay. And there are some additional things. Each vertex. Carries. I will say color. Which is. Which is a ramification point. An element of R. So each vertex carries a color. A. B. C. D. E. F. G. Or something like that. And each and. And. And it carries also. Variable. So each vertex V. Carries a color. A. V. And the variable Z. V. Okay. And the theorem is that. So this is a. Residue. At. Z. V. So product for all vertices. You have a product for. Arrowed edges. Which are the form V1. Going to V2. Or let's say V going to V prime. K. A. V prime. Of. Z. V. Z. V prime. And the product of nonarrowed. Edges. B. Sorry. V. V prime. Of some B of Z. V. Z. V prime. So you see that's exactly what we have been doing on the proof of that theorem is completely straightforward. What is not totally obvious is that constraint that you want to go only from one vertex to its descendants. But it's basically the way you construct those graphs recursively that implies that. And it's not difficult at all. So for instance, if you want to see what are all the graphs that contribute to this omega one two. And that it's all the graphs that I've represented. Also, yes, sorry. Again, at each vertex, there is a left side on the right side. So, or if you don't put it, it will be, it will mean you will have some powers of two. Okay. So vertices that are, there is a left vertex. And the right one is the left side on the right side. So let's say that a vertex always looks like that. The one already touches those two. Yes. Yes. Yes. At each vertex, there is one in going oriented edge. The outgoing edges can be both oriented or both non-oriented or one oriented, one non-oriented. All possibilities exist. So this is very convenient because it allows to represent very easily all those quantities. So instead of remembering formulas, you just have to remember this procedure and you can, it's easy. And also this graphical representation is really the key to proving many other theorems. For instance, that's how I use this representation to prove that the omega-g-n's are symmetric. And I use this representation to prove in fact many of the properties of a topological recursion. This graphical representation is very convenient because it makes things recombinatorial to prove. So the next, yes, another remark is that another representation. It's instead of remember that my kA of Z1, Z, I represented so far as Z1, Z sigma of Z. But it's also convenient to represent it by a thickening. So instead of having just legs, you thicken the legs and somehow a three-dimensional representation. But just notice that here, outgoing, there is only, in fact, there is only one variable Z. On both legs, the two variables are somehow the same. One is just another copy of the first one. So you would like to think that there are two outgoing things. So it looks like a pair of points. But in fact, there is somehow only one boundary which splits into two. So in fact, so somehow it's like a pair of points. But with a boundary which is a disk that has been pinched at one point, that's the good way of representing things because there is only one modulus associated to the boundary. There is only one boundary, but somehow you can glue two things. 
So when you're going to glue it, you will have that possibility. So Z1, and somehow here you have Z that splits in two. And same thing for B of Z1, Z2, which was omega 0, 2 of Z1, Z2, which I represented so far like that. But I will now represent it as just the cylinder, Z1, Z2. So with this representation, for instance, I have omega 0, 3 of Z1, Z2, Z3. Now I represent it as this. So now this is truly a pair of points. Z1, Z2, Z3 is, according to my representation, something like that. And here you glue two cylinders, Z2, Z3, plus the other possibility, so which is Z3 and Z2. And so on. And for instance, omega 11, which would be this quantity, is going to be just that. And here you glue a cylinder. So it's very inspiring picture. Unfortunately, we are not truly able to give it meaning in geometry for the moment. But I would say it's a beautiful picture. So the idea is that the theorem above says that if you want to compute omega gn, so corresponding to, so basically it says, so the topological recursion first, is that if you have something of gn is g with n boundaries, well, basically it's all the possibilities to remove, it looks like all the possibilities to remove a pair of points. So you see here that there is one more hole here. You are indeed creating the extra hole, plus, okay, all the possibilities to do that. So this is just a graphical representation of recursion. For the moment, it does not really mean something. For instance, in the Mirzarin case, for hyperbolic geometry, it is not known what would be the good line to cut to really give meaning to that representation. It is not known. Does it exist? It's not known. A simple representation point. Yes, for the moment we have simple ramifications points. In fact, that's my next section. In fact, this graphical representation allows to give generalization to higher order ramifications points. That's exactly what I was going to say now. So which is my number? So it's just my 3-2. Okay. So far, indeed, I have given the formula for topological recursion, only assuming that we had simple ramifications points. But this graphical representation makes it easy to define also the case of higher order ramifications points. So that was my higher order ramifications. So remember, I have the spectral curve is something. So we have a certain remand surface, sigma, and we have a projection to the base. So sigma and sigma zero, and we have that projection which I call x. And typically, near a simple ramification point in the vicinity of a simple ramification point, the map is 2 to 1. And there are exactly two branches that can be exchanged by an evolution. Now assume that we have a higher order ramification point somewhere, where we have several branches coming to... So now in a vicinity, we have not a 2 to 1 map, but a n to 1 map. So imagine A, a ramification point of order dA, and dA larger than 2. So dA is the number of branches that meet at this point. And which means that locally there is a group, there is a local, let me say local Galois group, dA that permits the branches. It means that if sigma belongs to dA, it is more or less equivalent to say that x of sigma of z equals x of z in a neighborhood of A. Typically there are cyclic groups. But when you go to... I think when you go to camera curves for heat chain system, it can be something... well, there is a possibility to take something more. But yes, indeed for curves, this is really cyclic group. So typically this will be a zdA. 
So indeed for a simple ramification point, this is z2, dA contains exactly two elements, the identity and an evolution. But for higher order points, it can be something more. And so let me define what I would call... so for every k such that k is between 2 and the order of dA. Let me define the equivalent of a kernel k, but now it will carry a small k. So somehow this one was the k2, kA of z1. Let me call that one p and let me call all the other z1 up to the k. For the moment they are like independent variables. Excuse me? We put z1 in the front. Yeah, let us... Okay, let me change the notation. z1, yes, p1, pk. Okay, let me change my notation a little bit so it is not confusing. And so by definition it will be slightly different from the previous one. It will be integral from a to p1 of omega02 of z1. And so the dot means the variable which we integrate. And the integration path is taken within the neighborhood of a. And product from j equals 2 to k of omega01 of p1 minus omega02 of pk of pj. So this is just the product. And I will represent that instead of representing it as before as something with... So it has z1 and here instead of having two things on the right-hand side it will have k of them. So p1 up to pk. So it will be represented this way. Sorry? No, they are not ordered. Oh, no, sorry, sorry. Yes, they are ordered. But in the end, it is exactly like before, there is the symmetry which is that the value in fact will be... The value given by the particular equation will be independent of that order. But for the moment... Z1, of course... Yes. Yes, z1 plays a totally different role. But they are symmetric with respect to all variables. No, no, k was not symmetric at all. No, no, k is not symmetric. It is also different from g2 because it is from reflected points to p1. Yes, but not now. And I had... Before I was not starting from A, I was starting from sigma of p1 and I had a one-half in front. So it is different. But it will give the same result for the topological recursion. So it is different. So somehow, before what I had defined was the symmetrization of that one. But since everything is symmetric in the end, you can indeed symmetrize on that, give the same result. But so, now the definition... The definition is that omega gn of z1, zn equals... Some of our whole branch points. Some of our whole k equals 2 to the order of the group. Residue at p, let's say, let's call it p goes to A. And here, I will put... Okay, let me put that before. Some of our whole subsets, so this is a notation A, is a subset... So included in ga minus identity. So I will take all subsets of my group except identity and I will take all subsets of cardinal k minus 1. So that's just a notation to say subset of cardinal k minus 1. Okay, so now I will have... Residue at z goes to A, ka of z1, which was my first variable here. And the next variables are z and all the set of sigma of z for all sigma belonging to A. So I have k minus 1 of them, so that makes k variables here. Times... Now I have something that's going to be like the product of omegas that appear in the right hand side. A sum of product of omegas. Just before writing it in letters, let me write it graphically. So graphically what I want to say is just the following thing. Graphically what I want to say is basically nearly the same picture as below except that now I have not only 2, but I can add 3, 4 and so on. So what I want to say is that is the following. Okay, 1, 2, up to n, okay. Is the sum of all possibilities. 
So you have here k of them. So you have that sum of our k. This was z1, z2, zn. Here you have g, g. And here, so k equals 2 to ga. And here you can glue things. Some of them can get disconnected, so for instance like that. So here you can have something like that. That could be a possibility. Okay, you have to split into all possible ways. So either you could have one connected piece or two disconnected, I mean two connected components or three connected components. The only thing you want, well one thing you want is that every of the z2, up to zn variables here in the right hand side must be connected to something there. I mean for instance you are not allowed to do something like that. And here is something like that. Well I mean you don't want something like that. Okay, but it's just that when we are going to write the sum of products we have to be careful that we don't have such things. Okay, so we have all possibilities to do that. Just remark one thing, if you have l components. So add all the a la characteristics, so sum from i equals one to l of chi i, the a la characteristics, plus the a la characteristics of that which is a sphere with k plus one boundaries. So plus two minus k plus one. The whole a la characteristics must be two minus two g minus n, of course. So which puts the constraints on the genus that you can have here. Okay, and the constraint on the genus is just that all these genus, g1, g2 and so on, you will have that sum from i equals one to l of g i, which must be equals g minus k plus l. In the end that's all what this says. So that's what we are going to have in this. So here we have that sum of g i equals g minus k plus l. So if we want to write the precise definition of a bracket which is here, we have to take all the possibilities to split those k things into parts. So it's the sum of our partitions. Let me write it as sum of our partitions. Sum of our mu equals partitions of k elements, of those elements of z and the set of sigma i of z when sigma belongs to a. So we have partitions, and so partitions means I don't order the subsets. Okay, it's important for the symmetry factors. So sum of our partitions. And now when I have chosen some partitions, so for instance this will be my first part and this will be a second part. Okay, when I've chosen those parts, I have to decide to what I glue them to. So now for each choice of partitions, I have to, so let's say mu is mu1 mu l. These are my parts. So now I have to take sum over g1 plus, as I say, gl equals g minus k plus l. And sum over i1 i l must be equal to all my external variables. Why mu equals to the vector mu1 to mu l? Yes. But mu is equal to those of partitions. So it means that these are the parts. It's just an notation to say that it's the parts. But how do you know that there will be l parts? Well, l is by definition the number of parts. That should be equal to this, since sigma, the partition depends on the sigma. So sigma is coming from this cardinality k minus 1. In fact, no, the partition really depends only on the cardinal of sigma. So how do you, how it is equals to l? But it's a part. The l is the number of parts. Subtraction of l. The set set is the power of cardinality k. l equals the number of parts of mu. So k, set of cardinality k, you divide it into 8 parts. So l depends on mu. It's an l of mu. l of mu. So I should write it this way, l of mu. So l is a function of mu. Some partitions have one part, some partitions have two parts, some partitions have three parts. 
And at most they can have k parts. So now we have the product from 1 equals to the number on all parts of omega g i of, now we have the cardinal of the part mu i plus the cardinal of the part of the set i i. Well, the main difference is that parts cannot be empty. Whereas sets can be empty. Parts cannot be empty by definition of a partition. But sets, here in the union of sets, the sets can be empty. You have mu i and ii. OK. So that's it. With a restriction. And it's the same as before, no disk. You want to have never an omega 0 1 appearing in the right-hand side. I've got a different. May I turn to this case with many variables here. P1, P1. Do I understand correctly that P1 is distinguished? Yes. But it's at the end of the day it's symmetric. Yes, in the end of the day it's symmetric. So that works. OK. That's the definition. Of the topological recursion when you have higher order branch points. And the other one was just a special case of that one when all branch points are for order 1. Oh, you understand that. Yeah, it was a different place. P1 is distinguished. Can you say P1 is distinguished? Yes, indeed. P1 is distinguished. But in the end, the result is independent of how you distinguish it somehow. Yeah. So that's the definition for higher order branch points. And there is a very beautiful thing is that, well, there is one easy way to obtain higher order branch points. So an easy way of obtaining a higher order branch point is taking a limit of several simple branch points coming together. We have a definition of the omega g n's when we have only simple branch points. And now we take a limit where several simple branch points coalesce together. And then we have another definition of the topological recursion. Is the limit of the first one equal to the other one? And the answer is yes. So that's a theorem. So limits of, well, I'm not going to write it in full details, but simple branch points. So I'm going to write it to higher order branch points. Well, in that limit, well, basically what the theorem says is that the limit of omega g n equals the omega g n of the limit. So omega g n is continuous. I will later say how omega g n's are related to integrals over m g n. And in that case, so it's all related to the, well, to give formalism and, okay, I'm not going to enter the details, but basically this theorem says that the ancestor potential is continuous. Ancestor potential. The total ancestor potential. So basically Milanoff used this theorem to prove that the total ancestor is continuous. And that has lots of consequences. Like for instance, it allows to prove Faber-Pandari-Pande's conjectures about R-spin intersection numbers related to usual intersection numbers. So, because simple branch points will be related to intersection numbers, and higher order branch points can be related, for instance, to R-spin intersection numbers. And so this theorem says that you can obtain R-spin intersection numbers as limits of usual intersection numbers. Let me mention that this theorem is not trivial at all. Because for instance, if you call epsilon the distance between two branch points, two simple branch points, each term in the formula seems to have poles in powers of epsilon. So it seems that each term could diverge as epsilon goes to zero. But when you take the sum of all terms, in fact, all the poles disappear, and there is a limit, and the limit is equal to that. So it's not trivial at all, but it's true. Does it all work as a very, very similar thing? No, no. Okay. 
It works if the coalescing branch points lead to a smooth higher order branch point — so the spectral curve is still smooth. I did not write all the details; in fact, there is another theorem for when it is not smooth, which I'm not going to talk about now. But again, in fact, the topological recursion is always somehow well behaved under those limits. It always, in some sense, commutes with the limits, except that when the limit is not smooth, it is only after a rescaling that it commutes. But still, basically, the limit of the topological recursion is always the topological recursion of the limit, up to some rescaling. So it's well behaved, and it can also be compared to the Crepant Resolution Conjecture. Okay. But so let me now go to something else. Yeah, I understand what you are saying. Can we explain this limit thing? Okay. Let's take the example of x of z equals z to the r over r, minus epsilon to the r minus 1 times z. Okay, dx. You see that when epsilon equals 0, you have that 0 is a branch point of order r. And when epsilon is not equal to 0, dx of z is basically z to the r minus 1, minus epsilon to the r minus 1, times dz. Okay. Which means that the zeros of dx are epsilon times the roots of unity, so epsilon times e to the 2 pi i j over r minus 1. Am I right? Yes? So basically you have here several roots of unity, and each of them is a simple branch point. So let's call them a j. Each of them is a simple branch point. When epsilon equals 0, you have only one branch point, at 0, which is of higher order. Okay. So the question is: if you compute the topological recursion with that curve for epsilon non-zero, using the usual formula for the topological recursion, you find some omega g n's — do they have a limit when epsilon goes to 0? And the answer is yes, and the limit is precisely the one I computed there. And it's not trivial, because each term can have negative powers of epsilon; for instance, the kernel K has negative powers of epsilon. But it turns out that when you sum all the graphs and everything... Which one is easier to compute? It really depends. No, in fact, well, the last one is probably easier to compute there. No, it really depends on your problem. Well, I mean, in the end you know that there is no pole in epsilon, but it's not so obvious from the definition. When you take the first definition, so when epsilon is non-zero, each term has poles in epsilon; but when you take the sum over all branch points and so on, all the poles disappear, and there is a limit when epsilon goes to 0. So let me now say something else. So now let me go to the third part, and I'm going to go into your business. So this will be my third part, which I called ABCD tensors, which is really what Maxim and Yan introduced. So let me first just make a remark, which is that since the topological recursion always computes residues at branch points, everything will in the end be just combinations of the Taylor expansion coefficients at the branch points. And remember that sometimes we had residues with some kernel K a — let me write it, for instance, this way — and here we have a B of z, z2. We are going here to take a residue at z going to a, z very close to a. So we need to Taylor expand — sorry, this was omega 0 2 — we need to do the Taylor expansion of this omega 0 2 when z is close to a branch point, but with z2 arbitrary. So let's do this Taylor expansion. So let me — so let's come back to the simple branch points.
Everything could be done for higher order branch points, but let me go to that case for simplicity. So, the local variable near a: z was an abstract point on the curve, but let me define zeta a of z, which is just the square root of x of z minus x of a. So this is a good local variable. I'm just a bit confused: you're taking a as a ramification point, not the branch point? Yes, ramification point, sorry. Okay, I should say ramification point; excuse me, I often make the confusion between the two. And the branch point is the image of the ramification point? Yes, yes, the branch point is the image of the ramification point on the base curve. You're right. It's a kind of abuse of language I make from time to time. So, okay. So this is a good local coordinate in the neighborhood of a ramification point, and in that coordinate the involution is just changing zeta to minus zeta. So the idea is now that we want to expand omega 0 2 in that coordinate: omega 0 2 of z, z2, we would like to expand it in powers of zeta — let me leave some space — zeta a of z to some power, times d zeta a of z, because it's a one-form. Okay, let me take directly the odd powers — no, sorry, even powers, 2k — times a coefficient, and the coefficient will be a one-form in the variable z2. And let me give a name to it: xi a k of z2. So it's just the name of that coefficient, up to a detail: I like to put a minus sign, and I like to put a 2 to the power k over the 2k minus 1 double factorial. Okay, it will be more convenient for practical applications. Plus terms which we ignore — yes, plus the odd part — which will play no role, because each time, thanks to the symmetry of K, all the odd terms disappear at the end of the computation. Okay. So another way of saying that is that xi a k of z2 is simply minus 2k minus 1 double factorial over 2 to the k, times the residue when z goes to a of one over zeta a of z to the — 2k? It must be 2k plus 1 or 2k minus 1. Yes, 2k plus 1, of course — times omega 0 2 of z, z2. So it's simply that. It's just the definition: these are the coefficients in the Taylor expansion of omega 0 2. And so it's obvious from the definition of the topological recursion that all that you will get in the end will be combinations of those coefficients, just because that's the only thing residues can do. So, these are defined on the whole curve minus the ramification points? Yes, they have poles at the ramification points. So in fact, it's easy to see that xi a k of z behaves as one over zeta a of z to the power 2k, times — well, there is a power of 2, did I write it? — something like the 2k minus 1 double factorial over 2 to the k — plus something analytic at a. So it means that subtracting that from that is holomorphic at a. Yes, and analytic everywhere else. So it has a pole of that order — maybe it's 2k plus 2 — and there is no other pole. Is it an assumption that there is no other pole? No, it's by definition of this: the only pole of this can be at a, there can be no other pole; it's a consequence of that definition.
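For reference, here is a hedged transcription of the local coordinate and of the coefficient forms just defined; the sign and factorial conventions follow what is dictated above and may differ from other write-ups:

$$\zeta_a(z)\;=\;\sqrt{x(z)-x(a)},\qquad \xi_{a,k}(z_2)\;=\;-\,\frac{(2k-1)!!}{2^{k}}\;\operatorname*{Res}_{z\to a}\;\frac{\omega_{0,2}(z,z_2)}{\zeta_a(z)^{\,2k+1}},$$

so that, near $z=a$,

$$\omega_{0,2}(z,z_2)\;=\;-\sum_{k\ge 0}\frac{2^{k}}{(2k-1)!!}\,\xi_{a,k}(z_2)\,\zeta_a(z)^{2k}\,d\zeta_a(z)\;+\;(\text{odd part in }\zeta_a(z)).$$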
So now let me say something which is kind of obvious, but very deep. Let's call it a proposition. Omega g n of z1, ..., zn is a certain combination: we have pairs a1, d1; a2, d2; ...; a n, d n, and a coefficient — let me give some space — times the product over i equals 1 to n of xi a i, d i of z i. So basically, in each variable z i we can have only those things. And the coefficient, let me give it the name F g n of — so I take your notation — a1, d1; a2, d2; and so on. So there exist such coefficients, just because every residue formula is just going to do that; it cannot do anything else. Now the question is: what are those coefficients? And if you write the topological recursion, it gives a recursion among the coefficients. Well, no, first let me give some examples. And, sorry, another property is that — well, first, this sum is finite. It's easy to see that the sum is finite, again because by taking residues you always extract only a finite number of terms in the Taylor expansions. So the sum is finite. And the coefficients F g n are polynomials of the Taylor expansion coefficients of omega 0 1 and omega 0 2. And remember that I had written that omega 0 1, near a branch point a, was something like — again I put some powers, and then there was a 2 to the k — I define it this way: plus a sum over k of t a k, times 2 to the k over 2k plus 1 double factorial, I think it was, times zeta a of z to the power 2k plus 1, times 2 zeta a of z d zeta a of z — and I think there is the 2 here. So basically it says that the F g n will be polynomials of those coefficients t a k, and polynomials of the coefficients of omega 0 2 of z1, z2: if I expand that in a vicinity — let me leave some space — if I expand that in a vicinity where z1 is close to a and z2 is close to b, and subtract the pole, delta a b, d zeta a of z1 d zeta b of z2 over (zeta a of z1 minus zeta b of z2) squared, there must be some Taylor expansion coefficients, a sum over k and l — and of course here I'm going to write only the even part, the part that has the good symmetry in k and l — and let's call the coefficients 2 to the k plus l over 2k minus 1 double factorial, 2l minus 1 double factorial, B a k, b l, times zeta a to the 2k, zeta b of z2 to the 2l. So what we get is that F g n equals a polynomial of the t a k's and the B a k, b l's. It must be a polynomial of all those variables. In fact, if you think about how the recursion worked, remember that B, which was omega 0 2, appeared: there was an omega 0 2 for each non-arrowed edge in the graphs, and the kernel K in fact contained an integral of omega 0 2. So somehow the coefficients of omega 0 2 appear for each edge of a graph. So in the end, in those variables, this is a polynomial of degree 3g minus 3 plus n, which is the number of times they can appear. And for the t a k's it's more complicated — the degree of the polynomial in the t a k's is more subtle — but let me write some examples. Well, when you compute omega 0 3 of z1, z2, z3, you can compute it using the recursion formula, and the result is very simple: it's a sum over a of t a 0 times xi a 0 of z1, xi a 0 of z2, xi a 0 of z3. So that's the final answer for omega 0 3, for any spectral curve. It's very, very simple. So what does it mean? For the coefficients F 0 3, basically many coefficients are 0; the only coefficients that are non-zero are the ones with all the same a and all the d's equal to 0, and that coefficient equals t a 0. That's the only non-vanishing coefficient in this polynomial.
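To keep the notation straight, here is a hedged restatement of the structural statement and of the omega 0 3 example just given, written in the conventions dictated above:

$$\omega_{g,n}(z_1,\dots,z_n)\;=\;\sum_{a_1,\dots,a_n}\;\sum_{d_1,\dots,d_n}\;F_{g,n}\!\begin{pmatrix}a_1&\cdots&a_n\\ d_1&\cdots&d_n\end{pmatrix}\;\prod_{i=1}^{n}\xi_{a_i,d_i}(z_i),$$

$$\omega_{0,3}(z_1,z_2,z_3)\;=\;\sum_{a}\,t_{a,0}\;\xi_{a,0}(z_1)\,\xi_{a,0}(z_2)\,\xi_{a,0}(z_3).$$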
Another one that's interesting is omega 1 1. So let me write the coefficients of omega 1 1, and let me write just the coefficients that are non-zero. So F 1 1 of a, 1 equals — I think it's 1 over 12 or 1 over 24 — and F 1 1 of a, 0 is something of the same kind, involving t a 0, t a 1 and the B coefficients — something like that. So that's an example. You can compute it, which means that you can see what omega 1 1 is: it is just that. Of course, this 1 over 24, you're going to see, is an intersection number; we are going to see that in a moment. But just now, let me say that the recursion — the topological recursion — implies a recursion on the coefficients F g n. And let me collectively denote those coefficients F g n of alpha 1, ..., alpha n, where alpha is a pair (a, d). But now let me say that the alphas are in a set, whatever it is: they belong to a set of indices. And what is the recursion? The recursion can be written in the following way. So, the theorem: there exist four tensors that I will call A, B, C, D. These three will be rank 3 tensors, and this one will have rank 1. And the definition is just — the definition of D is that D alpha is the coefficient F 1 1 of alpha, and A of alpha 1, alpha 2, alpha 3 is, by definition, the coefficient F 0 3. So that's the definition; for the moment, the theorem says nothing. The interesting thing is that for 2g minus 2 plus n larger than 1, F g n of alpha 1, ..., alpha n equals the sum over — let's call them beta and gamma — of C of alpha 1, beta, gamma, times — and here we have our usual combination — F g minus 1, n plus 1 of beta, gamma, alpha 2, ..., alpha n, plus the sum over g1 plus g2 equals g and I1, I2 splitting alpha 2, ..., alpha n, of F g1, 1 plus the cardinality of I1, of beta, I1, times F g2, 1 plus the cardinality of I2, of gamma, I2 — except that now, in this sum, I don't want F 0 1, but I also don't want F 0 2. That's what I will call stable, which means no disk and no cylinder — so it's a stronger condition. Plus — and this is the end of that parenthesis — the sum from j equals 2 to n, and the sum over beta, of the tensor B of alpha 1, alpha j, beta, times F g, n minus 1 of beta and alpha 2 to alpha n, but where you remove alpha j. So this theorem is totally obvious from the definition, once you write what the C's and the B's are: the C's and the B's are basically computed as residues involving K and B, so they are just very simple combinations. But the interesting thing is that it takes this very general form. Now the question you could ask is: okay, take four tensors A, B, C, D, and apply this recursion — what does it give? Why did you say they are tensors? Well, it just means they depend on three indices. But that's a good question: to make them genuine tensors, you need to say that they are the coefficients of some tensors corresponding to some choice of vector space; you have to build a vector space and all that. I'm not going to enter the details, but for the moment it just means that it's a function of three discrete indices; it's just the coefficients of something. I'm not going to do that — the experts are here. Yes, some of them are upper indices, some of them are lower indices, but I don't want to enter the details. I'm just going to cite your theorem. So it's a tensor in the classical sense, not in the algebraic sense of extending the scalars from...? No, no. Why? It's a tensor product of some spaces. Okay, but let me just say — so this is the theorem of Kontsevich and Soibelman, which is probably from last year, no? 2017? I don't remember exactly, but it's very recent.
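Here is a hedged transcription of the recursion stated above; the placement of indices and the exact symmetrization conventions may differ from the speaker's blackboard, but the structure is as follows. For $2g-2+n>1$,

$$F_{g,n}(\alpha_1,\dots,\alpha_n)\;=\;\sum_{\beta,\gamma}C_{\alpha_1\beta\gamma}\Bigl(F_{g-1,n+1}(\beta,\gamma,\alpha_2,\dots,\alpha_n)\;+\;\sum_{\substack{g_1+g_2=g\\ I_1\sqcup I_2=\{\alpha_2,\dots,\alpha_n\}}}^{\text{stable}}F_{g_1,1+|I_1|}(\beta,I_1)\,F_{g_2,1+|I_2|}(\gamma,I_2)\Bigr)\;+\;\sum_{j=2}^{n}\sum_{\beta}B_{\alpha_1\alpha_j\beta}\,F_{g,n-1}\bigl(\beta,\{\alpha_2,\dots,\alpha_n\}\setminus\{\alpha_j\}\bigr),$$

with the initial data $A_{\alpha_1\alpha_2\alpha_3}=F_{0,3}(\alpha_1,\alpha_2,\alpha_3)$ and $D_\alpha=F_{1,1}(\alpha)$, and where "stable" means that no $F_{0,1}$ and no $F_{0,2}$ factors appear.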
But so the theorem is that now, if you define the following function — so let's define a vector, an infinite-dimensional vector, let me call it x, which has coordinates x alpha, where the indices alpha belong to our set of indices, whatever it is, so x is just a vector with coordinates x alpha — I'm going to define a function of x which is an exponential: a sum over g — so it will depend on two things, x and h-bar — h-bar to the g minus 1, a sum over n, and everything is in the exponential: sum over n, 1 over n factorial, sum over alpha 1, ..., alpha n, of x alpha 1 ... x alpha n times F g n of alpha 1, ..., alpha n. So somehow I'm just saying that I take the F g n as the Taylor expansion coefficients of some function: I just define a function from its Taylor expansion. I also introduce an h-bar corresponding to the expansion in powers of g. And I define that function. Okay, that's the definition. It really makes sense only if the F g n are symmetric. Sorry — it's formal, it's a formal series; okay, for the moment it's just a formal series. But the main theorem is that there are some operators that annihilate it. And those operators are defined as follows: define the operators L alpha, which is h-bar d over d x alpha, minus one half of the sum over beta and gamma of — let me call it — A alpha beta gamma x beta x gamma, plus 2 times B alpha beta gamma x beta h-bar d over d x gamma, plus C alpha beta gamma h-bar d over d x beta h-bar d over d x gamma — minus h-bar D alpha. Okay, that's the definition of an infinite family of operators. You have an infinite family of operators, and the theorem is: for every alpha, L alpha applied to psi equals 0. Well, and you can go somehow backwards: if you have a psi that is annihilated by those operators, then the coefficients F g n have to satisfy this recursion, somehow. But there are some subtleties; the first is: is there a psi annihilated by those operators at all? For that, you have to say that the L alphas should satisfy some commutation relations. So basically, one requirement is that the L alphas should satisfy a certain relation: the commutator of two of them should be a combination, a sum over gamma, of coefficients — let me call them f alpha beta gamma — times L gamma. So they should form a Lie algebra. And in fact, now you can ask the question: given four tensors A, B, C, D — if I take arbitrary tensors A, B, C, D and I define recursively the F g n by this formula, will I get something interesting? The answer is, in general, no, for a simple reason: if the tensors A, B, C, D are completely arbitrary, the F g n will not be symmetric in their variables. So there is a constraint on A, B, C, D. This is the constraint? Yes, this is the constraint — yes, exactly, this is the constraint on A, B, C, D. But indeed, you discovered that. So the constraint on A, B, C, D is that they satisfy such relations. But again, those relations are not so trivial to solve. And basically, what are the tensors A, B, C, D that satisfy those relations? Well, that's what you have called a quantum Airy structure. But basically, what I want to say is that it's not easy to find examples of A, B, C, D that satisfy this — for which the F g n are symmetric — it's not easy to find them. But you see that if we start the other way around, from a spectral curve, we always get something that satisfies that.
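In symbols, and assuming the conventional factor of one half in front of the bracket (this is a hedged reconstruction of the dictation above; the factor of $\hbar$ in the commutation relation is also a standard convention rather than something stated explicitly):

$$\Psi(x;\hbar)\;=\;\exp\Bigl(\sum_{g\ge 0}\hbar^{\,g-1}\sum_{n}\frac{1}{n!}\sum_{\alpha_1,\dots,\alpha_n}F_{g,n}(\alpha_1,\dots,\alpha_n)\,x_{\alpha_1}\cdots x_{\alpha_n}\Bigr),$$

$$L_\alpha\;=\;\hbar\,\frac{\partial}{\partial x_\alpha}\;-\;\frac12\sum_{\beta,\gamma}\Bigl(A_{\alpha\beta\gamma}\,x_\beta x_\gamma\;+\;2\,B_{\alpha\beta\gamma}\,x_\beta\,\hbar\frac{\partial}{\partial x_\gamma}\;+\;C_{\alpha\beta\gamma}\,\hbar\frac{\partial}{\partial x_\beta}\,\hbar\frac{\partial}{\partial x_\gamma}\Bigr)\;-\;\hbar\,D_\alpha,$$

with $L_\alpha\,\Psi=0$ for every $\alpha$, and consistency requiring $[L_\alpha,L_\beta]=\hbar\sum_\gamma f_{\alpha\beta}{}^{\gamma}\,L_\gamma$ for some structure constants $f_{\alpha\beta}{}^{\gamma}$.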
The question is somehow: is there really something else that satisfies that? For me, it's not totally clear at the moment what else there could be, and I'm not sure. Could this cover, for instance, the noncommutative examples? Yes — in fact, we have many generalizations of the topological recursion, which I did not talk about, which go into the noncommutative realm, and I think this could in fact include them. And also, I said that so far I was considering only cases where there is a finite number of ramification points. But if you allow it to be infinite, with some grading, something to give a meaning to the sum, I believe that then it could include the generalization that you are talking about. So in fact, it's not clear to me how different those structures really are, and I think the answer is not known at the moment. Excuse me — the sum starts from what, from g? g starts from zero. So there is — yes, g goes from zero to infinity — so the first power of h-bar is negative, exactly as you usually have in WKB expansions: it starts typically with an exponential of one over h-bar. Which means that to really give a meaning to that as a power series, you have to take the log and multiply by h-bar. And then, when you want to apply the operator — well, exponentials commute well with operators, so you could rewrite everything only in terms of what is in the exponential. But is h-bar complex? No, no, it's a formal variable; it's a formal variable. So, since h-bar has a negative power in the leading term, this means that this equation is true as an expansion in powers of h-bar: all the coefficients of the powers of h-bar give zero. So you can't take h-bar going to zero? This limit doesn't exist? No, you can't, because you have h-bar in the coefficients. Well, there is a kind of limit as h-bar — no — psi is divergent as h-bar goes to zero; psi is divergent. But many quantities do somehow have a meaning as h-bar goes to zero. But I don't want to go into deeper details about that. So basically, this is an algebraic way of rephrasing the topological recursion. The idea is that you can have some vector space and tensors over it — one of those quantum Airy structures — and this also leads to the same topological recursion. But now I want to go to intersection numbers. What is the Hamiltonian of this system? Sorry — well, you should ask them. No, this is a wave function. Well, I mean, if you introduce a dagger, you can write a number operator, or what is it? Yes, somehow, via the other one. Okay. No, I think it was here. Okay. But now let me go to moduli spaces. Sorry — well, I suggest that you pursue this discussion later; I want now to move to something else. So, in fact, the idea is the following: omega g n, we saw in the case of Mirzakhani, was something like an integral over some M g n of something. Could that be the case in general? And in fact, our graphical representation will allow us to say that yes, it is, and I want to say exactly how. And I also showed you those coefficients F g n in some examples: for instance, this F 1 1 has a 1 over 24, and so it's very suggestive that these are intersection numbers. And in fact, I also said that all these coefficients F g n are universal polynomials of the t's and the B's. Okay. They are universal polynomials.
So, one way to compute them is to take some special examples of spectral curves where you know what they are: you take your favorite examples and you compute them. And using that, it was possible — there is a theorem that I'm going to write in a minute saying exactly what those polynomials are in terms of intersection numbers. Basically, the coefficients are combinations of intersection numbers. But before that, I want to introduce intersection numbers. Some of you know very well what they are, but I want to be pedagogical. So let's define: M g n is the moduli space of Riemann surfaces of genus g with n marked points, modulo isomorphisms. Basically, two surfaces with marked points are isomorphic if there is a map from one to the other which is holomorphic. So basically, this is the set where you have a Riemann surface of some genus g and you have n marked points, p1 up to pn. And the marked points are labeled, so in the isomorphisms I don't allow exchanging p1 with p2, for instance: p1 has the label 1 attached to it. Okay. There is another space which is very useful. So these are just smooth Riemann surfaces. Well, it is known that this is a finite-dimensional manifold, but non-compact, and it has a complex structure and is of complex dimension 3g minus 3 plus n. Yes — sorry, I forgot to say here that we assume that 2g minus 2 plus n is positive. Okay. And the compactification has been defined by Deligne and Mumford. The compactification is nearly the same thing, but now we shall allow surfaces to be non-smooth, with marked points p1, ..., pn, and they are what are called nodal and stable. So it means that now sigma g n can be something like that: it can be made of several components, some of them can have some genus, some of them can be spheres; and the marked points — okay, let me put them like that — some components can have several marked points, some can have none. Sorry, this one is not stable — okay, now it's stable. And the constraint — so "nodal" means that there are nodal points. You can always describe nodal points by saying that a nodal point somehow becomes a pair of marked points; I will use that. And these are the points p1, p2, up to pn. And to say that it belongs to M g n, you want the total Euler characteristic to be 2 minus 2g minus n. So the sum of all the Euler characteristics — 2 minus 2 g i minus the number of special points, for each i — must be equal to 2 minus 2g minus n. And "stable" means that every such number must be negative: every component must have a strictly negative Euler characteristic. That's what stable means. So this is the stability condition: the Euler characteristic of every component must be strictly negative. And somehow this is the compactification of that space; this is the closure of that space — and there are topologies that I'm not going to describe. And this is a quite complicated space, because — so it's not a manifold, it's not even an orbifold, it's a stack — but I'm not going to enter that. It's an orbifold. Okay, but I mean, there are components of different dimensions. Sorry? No, no. Okay, it's just that I'm not familiar with the names.
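For reference, the numerical facts just stated, collected in one place (a hedged summary in standard notation):

$$\dim_{\mathbb C}\mathcal M_{g,n}\;=\;\dim_{\mathbb C}\overline{\mathcal M}_{g,n}\;=\;3g-3+n\qquad\text{(defined for }2g-2+n>0\text{)},$$

and a nodal surface with components of genus $g_i$ carrying $n_i$ special points (marked points and nodes) belongs to $\overline{\mathcal M}_{g,n}$ when $\sum_i\bigl(2-2g_i-n_i\bigr)=2-2g-n$, and is stable when every component satisfies $2-2g_i-n_i<0$.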
But so now I just want to define the cotangent bundle. So I will define the bundle L i over M g n, or over M g n bar — it's defined for both — for i equals 1 to n. It's the bundle whose fiber over a point (sigma g n; p1, ..., pn) is the cotangent space of sigma g n at p i. So over each point p i you have the cotangent space at p i, which is a one-dimensional space, so locally it's isomorphic to C. So it's a line bundle. And since it's a line bundle, you can compute its Chern class — and it has only one Chern class, since it's a line bundle of rank one. The Chern class is called psi i: psi i is c1 of L i. And it's a two-form on M g n bar — it's a two-form, and you can integrate it on cycles of M g n bar. But since it's a two-form, you can only integrate it on a two-cycle; and if you want to integrate over the full M g n bar, you need a form of the right dimension. And so you define — so, definition of intersection numbers. There is a notation: basically, if d1 plus ... plus dn equals 3g minus 3 plus n — and from now on I will always write d g n equals 3g minus 3 plus n, it's shorter to write — then you define — it's just a notation — tau d1 ... tau dn, with index g, to be the integral over the full M g n bar of psi 1 to the power d1, psi 2 to the power d2, ..., psi n to the power dn. This is now a form which has the correct degree to be a top-dimensional form, so you can integrate it over the full space. And this is typically a rational number: it belongs to Q, and it's even a positive number. So these are numbers; they are called the intersection numbers, and this is only a definition. And just a remark: if d1 plus ... plus dn is not equal to 3g minus 3 plus n, you define them to be 0. This is very convenient, because we will write sums, and basically every term where the condition on the degrees is not satisfied will simply disappear from the sum. Just to give you some examples that are known: tau 0 tau 0 tau 0 in genus 0 is 1; tau 1 in genus 1 is 1 over 24; and so on. There are such numbers, and they are reminiscent of the coefficients F 0 3 and F 1 1 that we had before. And I want to state — just a remark about the kappa classes, the Mumford kappa classes. So, definition of the Mumford kappa classes: there is a forgetful map from M g, n plus 1 bar to M g n bar — let's call it pi — which is the forgetful map, meaning that you just forget the last marked point. If you take the class psi n plus 1 to the power k plus 1, you can push it forward — you can push forward that class — and basically — what I'm going to write is not totally correct, I think, but basically — this is the definition of the Mumford kappa class; at least inside each integral, that's what's going to happen. So basically, integrating over the position of the last marked point will be equivalent to integrating the class kappa k. And it's a 2k-form: kappa 1 is a 2-form, kappa 2 is a 4-form, and so on. And kappa 0 is a number; in fact, kappa 0 is just the Euler characteristic up to sign — it equals 2g minus 2 plus n. That will be useful for our purposes. Another thing which is useful — so somehow that's what this picture means here — well, my goal is to write a kind of Mumford-type formula in the end, and for that I need all the ingredients.
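Again for reference, here is a hedged restatement of the definitions just given, in standard notation:

$$\psi_i\;=\;c_1(\mathcal L_i),\qquad \bigl\langle\tau_{d_1}\cdots\tau_{d_n}\bigr\rangle_g\;=\;\int_{\overline{\mathcal M}_{g,n}}\psi_1^{d_1}\cdots\psi_n^{d_n}\quad\text{if }\;\sum_i d_i=d_{g,n}:=3g-3+n,$$

and zero otherwise; for example $\langle\tau_0\tau_0\tau_0\rangle_0=1$ and $\langle\tau_1\rangle_1=\tfrac1{24}$. The Mumford classes come from the forgetful map $\pi:\overline{\mathcal M}_{g,n+1}\to\overline{\mathcal M}_{g,n}$,

$$\kappa_k\;=\;\pi_*\bigl(\psi_{n+1}^{\,k+1}\bigr),$$

a class of degree $2k$, with $\kappa_0=2g-2+n$.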
So you see that for each nodal surface there is a kind of graphical representation, which is called the dual graph: the dual graph is a graph in which the components become vertices and the nodal points become edges. So here I have one component, one component, one component, one component. I have p1, p2. Here I have one nodal point, one nodal point; here I have two nodal points relating to that component; here I have one external leg and one external leg. And I will record the genus of each component. So here it was genus 0 and there are four special points on it, so let's say that it's a (0,4). Here it's genus 2 with 1 special point: it's a (2,1). This one was genus 1 with 4 special points: (1,4). And this one was (0,3). So basically this graph encodes a boundary stratum of M g n. So the boundary of M g n — basically the boundary of M g n sits inside M g n bar; M g n is open inside, and here I'm talking about the boundary of M g n inside M bar g n. Okay, I'm being — okay, I'm very introductory here. I just want to say how you go to the boundary: you just pinch a cycle somewhere, you create one nodal point — these are the codimension-one boundary strata. And these boundaries may not be connected. It's a complex codimension-one boundary. Sorry — it's not a boundary in the usual sense. Yes, okay. I just want to be very sketchy at this level. I just want to say that, basically, I want to write this thing: that the boundary, delta M g n — whatever that means — is basically M g minus 1, n plus 2, plus the sum over g1 plus g2 equals g, and the sum over n1 plus n2 equals n, of M g1, n1 plus 1 times M g2, n2 plus 1 — well, okay, the stable ones. Sorry — sorry, not that — it's just a product, yeah, okay; classes, and a union — and "sum" here means union as well. I just want to say that you have certain classes of boundaries that are identified, basically, with creating a nodal point. And let me call that — so here I'm not going to talk about the boundaries themselves, but about the classes of boundaries corresponding to those different strata. I'm not so familiar with that notation, but I know how to compute the intersection numbers corresponding to it. And I will call that map l, basically; it goes from classes on that space into classes on this one. What I just want to say is that in practical computations — I just want to explain the notation. And let me state the theorem, because time is running short. Let me state the theorem. I will first say what the theorem is when there is only one ramification point on our spectral curve, and then I will write the theorem for an arbitrary number of ramification points. So: our spectral curve (Sigma, x, omega 0 1, omega 0 2) has one ramification point, and it is simple; let's call it a. Okay, let's take this simpler example, which was the case for Mirzakhani: it had only one ramification point. So, the theorem — which is by me, in 2010, and soon after I did the general case of an arbitrary number of ramification points — the theorem is the following: omega g n, or if you want, the coefficients F g n of (a, d1), ..., (a, dn), is an integral over M g n: it is 2 to the d g n times the integral over M g n bar — okay, let me write it as intersection numbers — so there will be an intersection number: the product from i equals 1 to n of psi i to the d i — those are these d i's — times a certain class. And what is that class? That class, I write it: the exponential of the sum over k of t hat a k kappa k, times the exponential of one half of a sum of those boundary classes — the image by this map l of the psi classes corresponding to the two halves of the nodal point.
So, okay, let me write it this way — the kappas, the psi's — okay, I will explain the notation. It's a kind of integral over M g n bar of some class, and we just need some explanation of how you compute it. And if you want to recover omega g n, remember that omega g n of z1, ..., zn is just the sum over d1, ..., dn of F g n of (a, d1), ..., (a, dn) times xi a d1 of z1 ... xi a dn of zn. Okay, it was just that; those were the coefficients. Sorry — in the intersection number it is psi i to the d i. Yeah, but xi — wasn't it some other letter? In your lecture today it was different. The coefficients of omega 0 2, the Taylor coefficients of omega 0 2? No, zeta was the local variable. And I think it was: omega 0 2 of z1, z2 was the sum of zeta a of z1 to the 2k, d zeta a of z1, and the coefficient was xi a k of z2 — not psi, xi. Oh, sorry, it was xi. Sorry, you're right. Okay, you're right. So, just one thing for the moment: I had at some point defined the t a k's, which were the coefficients of zeta to the 2k plus 1 — so my omega 0 1 was basically 1 over t a 0 zeta plus a sum over k of t a k zeta to the 2k plus 1, d zeta, roughly. Okay. But you see, these are different names: here I have t a k and here I have t hat a k. These are not the same, but they are nearly the same — I would say it's just a transform. The idea is that you take the Laplace transform of omega 0 1: of omega 0 1 of z times e to the minus u x of z. Basically, if you expand this integral in powers of u, you recover the t a k's — there are some factors 2 to the k over 2k plus 1 double factorial, or something like that. But you see, the Laplace transform is nice because it kills this denominator, the 2k plus 1 double factorial. So if you do this Laplace transform — and just in order to make sure that it converges, let's put a factor of 2 square root of u over square root of pi in front, so it's nearly the Laplace transform — and here you integrate over the contour which is x inverse of x of a plus R plus: a certain contour, basically the contour on which x of z minus x of a is positive, so that the integral is convergent. If you expand that in powers of u, basically you recover the t a k's; but if you expand the log of that in powers of u, you recover those new coefficients: it is the exponential of minus the sum over k of t hat k u to the minus k. So that's the definition of those coefficients t hat k, and it's very easy to compute. And those other coefficients I had already defined: remember that omega 0 2 of z1, z2 was basically d zeta 1 d zeta 2 over (zeta 1 minus zeta 2) squared, plus a sum over k and l of 2 to the k plus l over 2k minus 1 double factorial, 2l minus 1 double factorial, times those coefficients B a k, a l, times zeta 1 to the 2k, zeta 2 to the 2l, d zeta 1 d zeta 2. Okay. I wanted to give some examples, but I see that time is nearly up. I just want to apply this in the case of Mirzakhani, to show that this directly implies Mirzakhani's recursion. So what is this i — is i the square root of minus 1? Where is the i? Here, e to the — in this intersection — psi to the i... No, which i are you talking about? So what happens here? Oh, here — it's a one half.
What's the delta? Okay, I'm going to say it in a moment. It's a one half. It's the image — so basically it just means that you push forward. Yeah, so it's the boundary divisor, and you take — so basically the boundary divisor corresponds to a nodal point, and the nodal point you identify with two points on the boundary divisor, and you take the two psi classes corresponding to those two points. So it just means that you push to the boundary divisor, to the image by l of this boundary divisor — it's just the image of those psi classes associated to the two halves of the nodal point. And the fact that this is an exponential means that it's well defined only when you do a Taylor expansion, and in fact this exponential somehow means that you will have to do the sum over all possible dual graphs, over all possible nodal surfaces. So when you Taylor expand the exponential, to first order you get one; at the next order you get basically one nodal point; at the next order you get two nodal points, then three nodal points, and so on. And you cannot get more than 3g minus 3 plus n nodal points. And, yes, another remark that this theorem implies is that basically F g n of (a, d1), ..., (a, dn) is zero if d1 plus ... plus dn is larger than d g n. So that's what this theorem implies; in particular, it implies that only finitely many of those coefficients are non-zero. That's what I said — it's a polynomial; only finitely many of them are non-zero. Remember that whenever you don't respect the dimension condition, all those intersection numbers vanish, so in fact almost all terms in this sum are vanishing, except a few of them. So let me do the example of Mirzakhani. If the dimension condition is not satisfied — if the actual dimension is zero, then you have discrete points? Yes, so then also it's not... Sorry, which condition? The condition you said, the dimension condition there. The sum of the d i's — the sum of the degrees of all the forms that appear in the integrand — has to be the top dimension of the space M g n; it has to be equal to d g n. Whenever the degree of a form is not the dimension of the space on which you want to integrate it, you get zero. That's what I'm saying. So let's take the example. My spectral curve was: basically it was C, x of z was z squared, and y of z was minus 1 over 4 pi times sine of 2 pi z, and omega 0 2 of z1, z2 was just d z1 d z2 over (z1 minus z2) squared. Well, in that example, the local coordinate zeta is really equal to z: remember that zeta of z is the square root of x of z minus x of a, and here you have a equals 0, so this is just z. Which means that this is precisely what you subtract when you do this Taylor expansion, which means that in this example you have B a k, a l equals 0 for all k and l. So you will not be concerned with that term in this example. But now what's interesting is to compute those coefficients t hat k. When you want to compute those coefficients t hat k, you have to take the integral of e to the minus u x times y dx — of x of z, y of z, dx of z — and you integrate z just over R. Just one remark: this is R in the z variable, which means R plus in the x variable. And another remark is that the integral of e to the minus u x, y dx is just equal to 1 over u times the integral of e to the minus u x, dy — it's just integration by parts. Okay, it's convenient.
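The computation that follows boils down to one standard Gaussian integral, which may be worth having in front of us (this is a textbook fact, stated in the normalization that seems to be used here):

$$\int_{\mathbb R}e^{-u z^{2}}\cos(2\pi z)\,dz\;=\;\sqrt{\frac{\pi}{u}}\;e^{-\pi^{2}/u},$$

which is where the factor $e^{-\pi^{2}/u}$, and hence $\hat t_{1}=\pi^{2}$, comes from in the lines below.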
So in fact, what I want to write now, instead of that, is dy of z. And the 2 square root of u times e to the — well, this factor is just absent, because x of the branch point is 0. So you want to compute this, which is 2 square root of u over square root of pi times the integral of e to the minus u z squared — and here you have a minus 1 over 4 pi, but when you take the derivative of that you get a 2 pi coming out, so minus 1 over 2, and you get a cos of 2 pi z, which I will write as exponential of 2 pi i z plus exponential of minus 2 pi i z, dz, and there is a 1 over 2, so 1 over 4 — so let me simplify: minus 1 over 4, if I'm correct. Okay, and it's very easy to compute that integral — it's fairly easy — and in the end, what you find is — first, it's two times the same integral — so it's minus square root of u over 2 square root of pi times — and basically we get exponential of minus pi squared over u — is that what you find? — times square root of pi over square root of u. Which gives minus 1 over 2 times exponential of minus pi squared over u. Which means that my t hat a 1 is pi squared, and my t hat a 0 is the log of that, minus log 2 — well, basically, all that I want to keep track of is this minus 2. Okay. But so what does the theorem give? Sorry, I'm a little bit late. The theorem says that omega g n of z1, ..., zn is the sum over d1, ..., dn of the product of the xi a, d i — and you can check that in this example xi a, d i of z i is the same thing as (2 d i plus 1) double factorial over 2 to the d i, times d z i over z i to the 2 d i plus 2 — times, and here you have psi 1 to the d1, ..., psi n to the dn, times the exponential of pi squared kappa 1. And you have — sorry, and remember you had a 2 to the 2g minus 2 plus n — and we have this minus 2 to the — sorry, 3g minus 3 plus n — and minus 2 to the power — so I must have made a mistake somewhere — 2g minus 2 plus n. Okay: if you are careful with the powers of 2, in the end you recover it. So this is the Weil–Petersson class, and you see that in the end these are indeed the hyperbolic volumes. So these are the hyperbolic volumes. So basically this theorem proves that the hyperbolic volumes do satisfy the topological recursion — and in fact we proceeded the other way around: we showed that the solution of the topological recursion is given by the hyperbolic volumes, using that theorem. So this proves Mirzakhani's recursion. And you see, I like this proof, because the most difficult part of the proof is to compute the Laplace transform of the sine function. And if you apply the same thing to the Lambert function, again you will have to compute the Laplace transform of the Lambert function — and the Laplace transform of the Lambert function is the Gamma function. The asymptotic expansion of the log of the Gamma function is the Stirling formula; it involves the Bernoulli numbers, and with almost no effort you find the ELSV formula for Hurwitz numbers. And the most difficult part of the computation is to compute the Laplace transform of the Lambert function and see that it is the Gamma function. So I like this proof. And also, if you take the mirror of C3 — of the Calabi–Yau manifold C3 — you take the mirror of C3, again you compute this Laplace transform, and it will be the Euler Beta function, which is basically a product of three Gamma functions.
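Since the Stirling expansion is the key input in the ELSV computation just mentioned, here it is for reference (a standard fact):

$$\log\Gamma(u)\;\sim\;\Bigl(u-\tfrac12\Bigr)\log u\;-\;u\;+\;\tfrac12\log(2\pi)\;+\;\sum_{k\ge 1}\frac{B_{2k}}{2k(2k-1)\,u^{2k-1}},\qquad u\to\infty,$$

with $B_{2k}$ the Bernoulli numbers; these are exactly the coefficients that reappear below in the Mumford formula for the Hodge class.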
The mirror of C3 is basically the equation — yes — e to the x plus e to the y plus 1 equals 0, or something like that; well, you have to take a framing, plus f times y — you take a framing f. And if you compute the Laplace transform of that — so now compute the integral of e to the minus u x, y dx, with y and x related by this formula — it's very easy to compute, and it's the Euler Beta function. So basically it's something like Gamma of u times Gamma of f u, divided by Gamma of 1 plus f u, something like that. What are you talking about — what is C3? C3 is just C to the power 3, three-dimensional. And it has a mirror, and this mirror is that curve. Yes, it's something well known; I'm not going to enter the details, but the most difficult part of the computation is to compute this Laplace transform, which involves three Gamma functions. And if you expand that in powers of u, you get a sum over the Bernoulli numbers, B 2k over 2k times 2k minus 1, times u to the 1 minus 2k. And since there are three Gamma functions, you get something like 1, plus f to the 2k plus 1, minus (1 plus f) to the 2k plus 1 — sorry, I'm not very sure about the signs here, but if you are careful it works out. But now it turns out that you should take the exponential of the sum of B 2k over 2k times 2k minus 1, and now you replace this u by the class kappa — and there is this plus one half of those boundary divisors; in fact, there are a few other terms, I don't want to enter the details, but this is basically what is called the Hodge class. This is the Mumford formula. And basically what it says is that if you look at the mirror of C3 and apply the theorem, what you get is integrals involving a product of three Hodge classes — and it's the Mariño–Vafa formula. So basically we have a theorem such that from the same theorem you get Mirzakhani's recursion, you get the ELSV formula, and you get many other things like that; and the most difficult part of the computation you have to do is to compute the Laplace transform of some function. And I find it beautiful. So next time I will show you what this gives when you have several branch points, and then it will be clearer what the sum over the boundary divisors really means. But I don't want to enter the details — also, the time is finished now. Thanks.
Topological recursion (TR) is a remarkable universal recursive structure that has been found in many enumerative geometry problems, from combinatorics of maps (discrete surfaces), to random matrices, Gromov-Witten invariants, knot polynomials, conformal blocks, integrable systems... An example of topological recursion is the famous Mirzakhani recursion that determines recursively the hyperbolic volumes of moduli spaces. It is a recursion on the Euler characteristic, whence the name "topological" recursion. A recursion needs an initial data: a "spectral curve" (which we shall define), and the recursion defines the sequence of "TR-invariants" of that spectral curve. In this series of lectures, we shall: - define the topological recursion, spectral curves and their TR-invariants, and illustrate them with examples. - state and prove many important properties, in particular how TR-invariants get deformed under deformations of the spectral curve, and how they are related to intersection numbers of moduli spaces of Riemann surfaces, for example the link to Givental formalism. - introduce the new algebraic approach by Kontsevich-Soibelman, in terms of quantum Airy structures. - present the relationship of these invariants to integrable systems, tau functions, quantum curves. - if time permits, we shall present the conjectured relationship to Jones and Homfly polynomials of knots, as an extension of the volume conjecture.
10.5446/54620 (DOI)
Welcome. Yes, can you see my screen? Yes. Okay, I will start now. Hello everyone, welcome to the 2020 openSUSE & LibreOffice Conference. Let me introduce myself: I am Kukuh Syafaat from Indonesia. I will talk about contributions happening in the openSUSE & LibreOffice communities in Indonesia. I will share activities in the local communities that make their way to upstream. So, there are a few observations about our local communities. Not everyone in the community is willing to contribute — most are users and enthusiasts. Some of the people who do contribute do not do technical stuff; they take part in non-technical ways. Some people contribute to more than one local community that works with upstream, and some contribute to more than one openSUSE project. And finally, starting to contribute is easy, but keeping the contribution going is not that easy. So, let's start with openSUSE in Indonesia. The openSUSE Indonesia community, aka openSUSE-ID, was founded in 2007. It is the one and only openSUSE community in Indonesia, whereas communities around other projects in Indonesia may have more than one local group. Here is openSUSE-ID at a glance: we have 13 openSUSE members. We have a lot of people in several channels, but not all of them are active. Here are the local contributions from openSUSE-ID: mirrors, membership, Heroes, translation, artwork, and events. openSUSE-ID maintains a Tumbleweed mirror. The Tumbleweed mirror serves the Asian region, mostly Indonesia, but also other Asian countries such as Singapore and Malaysia. There are three openSUSE mirrors in Indonesia maintained by universities and by Telkom, and one maintained by the community — by students, by a student Linux user group. Here are some statistics about the openSUSE-ID mirror: 4.7 million total requests. And this is for the openSUSE-ID Tumbleweed mirror: 2.9 million total requests. Another one: the openSUSE Board election, which involved three people. Edwin Zakaria, our long-time member and a mentor in the community, was one of them in the last election. Thank you, Edwin. Heroes: we have a community member in the openSUSE Heroes team, who has helped, including with the mirror infrastructure, and with connecting the databases. He also gave a talk about Podman on Kubernetes clusters. L10N, or translation: I contribute translations to openSUSE; I have submitted more than 1,000 strings. I also contribute to other translations, such as GNOME. And artwork: our community also contributes, mainly logos — for example logos for this conference, and logos for the openSUSE.Asia Summit from 2014 to 2019. Finally, for openSUSE, we organized openSUSE events in Asia, namely the openSUSE.Asia Summit, twice: in 2016 and 2019. Unfortunately, this year the event was cancelled because of the pandemic, and we have no plans to hold an offline conference. Okay, let's talk about the LibreOffice community in Indonesia.
LibreOffice in Indonesia: there is more than one community. There is the LibreOffice Indonesia Facebook group, created in 2011; LibreOffice-ID, created in 2018; Belajar LibreOffice Indonesia (Learn LibreOffice Indonesia), created in 2016; and maybe other communities as well. As for LibreOffice-ID in Indonesia, in short: we have 12 TDF members. Besides that, we have a lot of people in the various channels, but not all of them are active. Regarding TDF members, Indonesia has 12 members; the map has not been updated, so we do not know exactly where they all are. The activities in LibreOffice-ID contributing upstream are translation, design, QA, templates, and donations. For translation, this is the current status of the Indonesian translation: it is more than 90% — more than 90% because we held translation hackathons two years ago in several cities in Indonesia. In translation, we also have a lifetime translation contributor in Indonesia. He is known in many open source projects for focusing on translation. Thank you very much for your contributions, Andika. For design, we have Rizal Muttaqin, who created the beautiful icon themes Karasa Jaga and Sukapura. He also maintains icons in LibreOffice: Colibre, Sifr, and the Breeze icons. And then, Rizal also created beautiful artwork for LibreOffice 7, as you can see in the About dialog; he also won the splash screen contest. For QA, LibreOffice-ID has a contributor who checks whether bug reports can be confirmed or not, which helps maintain the quality of LibreOffice. LibreOffice-ID also has Rania Amina. He is a LibreOffice contributor from Indonesia who is active in multiple areas: he helps with translation, QA, design, and events. Rania also ran a LibreOffice template competition to celebrate LibreOffice. One of the prizes was LibreOffice 10th anniversary merchandise, as you can see in the picture. For the templates, here are the winning templates: Candy, Fresh, Yellow Idea, Growing Liberty, Candy Clone, and Grey Elegant. You can download the templates at lumbung.libreoffice.id. Yes, the competition was announced on the blog. After producing the templates, we in the community hope they can be submitted to upstream, so that the templates can be used by everyone who uses LibreOffice. And the last one: LibreOffice Indonesia donated to upstream after the LibreOffice 7 release — 292 euros. This is a token of appreciation from the LibreOffice Indonesia community for the development of LibreOffice, which we have been using for our work. I think that's all from me. Thank you very much for this session, and thank you, thank you. Thank you very much.
Not all members in the FOSS community are willing to contribute to the upstream project. Most of them are end-users and enthusiasts, and the rest are contributors. Not all contributors are tech-savvy; they also do non-technical stuff. Based on experience in the local FOSS community, bringing local contributions to upstream is really challenging. In this session, I will share some activities, both in the openSUSE and LibreOffice communities in Indonesia, to build up contributions and engage with the upstream community.
10.5446/54624 (DOI)
Okay, I hope you can hear me. Can you hear me and see my slides? Yes, I see your slides. I guess the sound is terrible when I do it like this. Okay, then I'll get started. So, I'm Peter from Hungary, the syslog-ng evangelist at One Identity, and I'm working with syslog-ng. Let me give you a quick overview of my talk. First, I will show you some basics of central log collection. I will show how the complexity of log management grows together with your organization, and how you can reduce that complexity using a dedicated log management layer. I will show you how to implement dedicated log management using syslog-ng. So, let's go back to basics: central collection. Why is it so important? First of all, it's ease of use: you don't have to log in to each machine to check your log messages; instead of that, you can go to the central location and check your log messages there. It also means availability: even if the sender machine is down, you can check its log messages to see what happened. It's also security. The first thing an attacker does when a machine is compromised is to try to remove the logs. The central server can of course also be targeted, but if you have central logging, then the traces are already saved on the central log collector. If you want to analyze your log messages, you also need central log collection — you don't want to install separate log analysis software on each of your machines. You also need central collection to correlate events from different machines. Unfortunately, as your organization grows, so does the complexity of log management. Once you have a larger organization, you will soon have separate teams for operations and security, and each of them wants to use their own log analysis system — so the number of log collection tools can quickly grow, all competing for resources. Of course, you have security and operations people, but also business users who want to work on logs, and most of the time they want to use their own systems. Most of these come with their own log aggregation tools. Here are some examples: the Elastic Stack comes with Beats, Splunk comes with forwarders, and logging-as-a-service providers and many SIEM systems come with their own collectors. All of these are installed on top of the local syslog. For the Elastic Stack, most of the time you have Beats on your systems, which send log messages to Kafka or Redis for queuing; the next step is Logstash, which analyzes the log messages and finally stores them in Elasticsearch. So, most of the time, users of these systems add an additional layer on top of the existing log management. Why is it a problem? First of all, because the more software you are running, the more computing resources they need. It also means more network traffic, as the same information travels multiple times over your network. This can be especially problematic when you are running your services in a cloud environment, where in most cases network usage is also billed to you. It also means more human resources, as you need to get to know multiple software: how they work, how to fine-tune them, and so on. And as you can guess, it can also mean more security problems, as the more software you run, the more potential security problems you have. A dedicated log management layer has the ability to reduce the complexity of log management: instead of working with logs separately for each of your analytics tools, you create a unified log management layer. Why is it a good idea for your organization? It can save you computing, network and human resources as well.
You have to work with just one or two software instead of many. Also, if you have one software, it's much easier to push it through security and operations review and maintenance than many software from many different vendors. Also, this means that log management is independent of analytics, and you can much more easily replace either the log management or any of the analytics software. Also, you can do long-term archiving separately from analytics, which is often much cheaper, especially as you store the logs only once instead of in multiple analytics systems. And as a bonus, this can save you quite a lot on licensing and hardware costs as well. Next, I want to show you how to implement this dedicated log management layer using syslog-ng. I will introduce syslog-ng and its four major roles, and say a few words about its modes of operation. Finally, I will also show you a syslog-ng configuration for central log collection and different syslog-ng features. What is syslog-ng? It's an enhanced syslog daemon with a strong focus on portability and high-performance central log collection. Originally it was developed in C, but it can now be extended using Java and Python code as well. The first major role is data collection. syslog-ng can collect system and application logs together, which is quite useful when you analyze them. It supports a wide variety of platform-specific log sources, like the journal or sun-streams. As a central syslog collector, it can receive both the legacy and the new syslog protocol over UDP, TCP and TLS connections. It can collect logs, or practically any kind of text data, from applications through files, sockets and quite a few other sources. If something is not directly supported by syslog-ng, you can extend syslog-ng using Python and create an HTTP source or a Kafka source, whatever you need in your environment. The next role is processing. Using syslog-ng, you can parse, classify and structure log messages with built-in parsers. You can parse CSV files or any kind of columnar data, you can use PatternDB for unstructured log messages, or the JSON parser and quite a few others for structured messages. You can rewrite log messages — and here, don't think about falsifying log messages, but, for example, about anonymization, which is required by various compliance regulations. You can format log messages using templates, as needed by different destinations — for example into JSON format, or changing the date format. You can also enrich data. Then — I will try to continue. Sure, why not? You sound great. Okay, it disappeared for a while. Okay. So, obviously, you can also send your log messages to different SIEM systems or log analysis software as well. Most people know only two modes of operation when it comes to logging: the client, which is sending log messages, and the server, which is collecting log messages centrally. But there is a very important third mode, often forgotten. It's called the relay mode. Instead of sending your log messages directly to the server, you can send messages to a relay, which forwards them either directly to the central server or to yet another layer of relays. Why use relays? First of all, if you have a UDP source, you want to collect those log messages as close as possible and not send UDP directly to the central server, as it's a lossy protocol and messages can easily get lost on your network. So, install a relay as close to your UDP source as possible and transfer the logs with a more reliable protocol. The next reason is scalability.
If you have a very busy central server, it's not certain that it can process all of the incoming log messages: parse them, enrich them, whatever. In this case you want to distribute processing to relays, which send the logs to your central server just for storage. Relays can also give structure to your network, and they provide additional availability and security: if your central server is down, log messages still leave your hosts and are collected on the relays. I mentioned that central log collection means availability and security, and this way both of these hold even if your central server is temporarily unavailable. A few words about log messages. Most of the log messages on a Linux system have a typical format like this SSH login: a date, a hostname and some text. The text part is usually an almost complete English sentence with some variable parts in it. It's pretty easy to read by a human, and originally log messages were read by humans, when you had a few large machines with just a few users. With many machines and even more users, you don't want to look at your log messages individually but create alerts and reports, and these free-form log messages are quite painful to deal with. There is a solution for this, called structured logging. In this case events are represented as name-value pairs: for example, an SSH login can be described with an application name, a username and a source IP address. The good news is that syslog-ng has had name-value pairs inside right from the beginning: date, facility and so on were all represented as name-value pairs. So this structure could easily be extended to handle additional name-value pairs, and the parsers in syslog-ng can turn unstructured, and some of the structured, data into name-value pairs. Why are name-value pairs so important? Because they make filtering a lot more precise: you know what is inside your logs, and you are not saving just log lines. It also means that if you have long log messages but you only need, let's say, a username and an IP address from them, then you can forward only the minimal necessary data and save, depending on your logs, probably gigabytes of network traffic. Or even terabytes, as I heard from one of our users. Let's talk a bit about configuring syslog-ng. When it comes to configuration, my initial advice is always: don't panic. The configuration is simple and logical, even if it doesn't really look so at first sight, or even at second sight. It uses a pipeline model with many different building blocks, like sources, destinations, parsers and so on, and these building blocks are connected into a pipeline using log statements. Any of the building blocks can be reused multiple times. On this first slide I show you a very basic syslog-ng configuration for local log messages, and on the next slides I will show you how to collect and process log messages coming from Suricata, an IDS. The syslog-ng configuration usually starts with a version number, so when you use a different version of syslog-ng it can tell you what to change in your configuration if necessary. You can include external configuration files; scl.conf here stands for the syslog-ng configuration library, which has many interesting configuration snippets in it. For example the Elasticsearch destination is defined there, as well as parsing credit card numbers and quite a few other useful configurations.
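Going back to the name-value idea for a moment, here is a tiny illustration in plain Python of why structured events are easier to work with than free-form lines. The field names used here are just examples, not a fixed schema from the talk.

```python
# The same SSH login event, first as a traditional free-form syslog line,
# then as structured name-value pairs that tools can filter on precisely.
free_form = ("Oct 11 22:14:15 server1 sshd[4242]: "
             "Accepted password for root from 10.0.0.5 port 51424 ssh2")

structured = {
    "PROGRAM": "sshd",
    "HOST": "server1",
    "ssh.user": "root",
    "ssh.source_ip": "10.0.0.5",
    "ssh.source_port": "51424",
    "ssh.auth_method": "password",
}

# Filtering becomes precise: no fragile substring matching needed.
if structured["PROGRAM"] == "sshd" and structured["ssh.user"] == "root":
    print("alert: root login from", structured["ssh.source_ip"])

# Forwarding only what analytics needs can save a lot of bandwidth.
minimal = {key: structured[key] for key in ("ssh.user", "ssh.source_ip")}
print(minimal)
```

The second print shows the "forward only the minimal necessary data" point: if the analytics side only needs two fields, only two fields have to cross the network.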
You can set some global options, which you can override later on in your configuration. Then there are a couple of building blocks, a source, a destination and a filter, which are typically used for local log messages, and there is a log statement which connects all of these building blocks together. At the bottom of the screen there is another include: any .conf files from that directory are used as part of the syslog-ng configuration. Here I start showing you a longer configuration for collecting Suricata logs centrally, processing them and storing them. First we define a source: it's a TCP source listening on port 514, and we use flags(no-parse), as otherwise syslog-ng parses all incoming messages as syslog messages. Next we define a JSON parser, as Suricata uses JSON format for logging. Unless we use the JSON parser, syslog-ng treats the incoming logs as one long string, so if we want to see inside and use the different name-value pairs, we have to parse them first. At the top of the screen we define a GeoIP parser, which takes an IP address as input, checks it against a GeoIP database, and creates new name-value pairs containing the geolocation of that IP address. At the bottom of the screen there is a rewrite which turns the results of the GeoIP parser into a form that can be interpreted as a geolocation by Elasticsearch. Next we define a couple of destinations. At the top of the screen there is a file destination for the Suricata logs, using JSON formatting so that we keep all of the name-value pairs from the log and can see what is inside. At the bottom of the screen there is an Elasticsearch destination, sending the very same logs, formatted the very same way, to Elasticsearch. Here we define a Python parser, which helps us to resolve IP addresses into host names, and at the bottom of the screen another parser which takes IP addresses as input and adds some extra information as name-value pairs, like the administrator of the given machine or the services running on it, whatever you want to store. Here you can see some Python code inline. This is a very simple resolver, but unless you have very high traffic it does its job; I measured about 15,000 messages per second, so it's quite fast. And here is the heart of the configuration: the log statement, which connects all of these building blocks together. You can have not just the previously defined building blocks here, but also some extra logic, practically filters. Here we see the source, the JSON parser, and the next one is using the Python parser; the expression means that we check the destination IP address and run the resolver on it only if it is not a local IP address. The next one is pretty similar: we check the source IP address and use the other parser to add information about local IP addresses. Here is another example: we check if the hostname, in this case sledge.org, is in a name-value pair, and if it is, we save the message to a file, but we could easily change that to an SMTP destination. And of course, instead of sledge.org we could put something more useful, but you get the logic. We can use blacklist filtering as well: we can give a long list of IP addresses to the in-list filter. In this case I checked against a malware command-and-control IP address list downloaded from a security website, and we set a name-value pair depending on whether the IP address was found in the list or not. Next we call the GeoIP parser and the rewrite rule.
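As a rough sketch of the kind of inline Python resolver mentioned above, assuming the class-based interface of syslog-ng's python() parser, where parse() receives the log message as a dict-like object and returns True to keep it, the code could look something like this. The name-value pair names ("dest_ip", "dest_hostname") are illustrative, not taken from the talk.

```python
import socket

# Minimal reverse-DNS resolver for a syslog-ng python() parser.
# Assumption: syslog-ng instantiates this class and calls parse() for each
# message, passing a dict-like LogMessage; returning False would drop it.
class SngResolver(object):
    def init(self, options):
        # simple in-memory cache so the same IP is not resolved repeatedly
        self.cache = {}
        return True

    def parse(self, log_message):
        try:
            raw = log_message["dest_ip"]          # illustrative field name
        except KeyError:
            return True                           # nothing to resolve, keep it

        ip = raw.decode() if isinstance(raw, bytes) else raw
        hostname = self.cache.get(ip)
        if hostname is None:
            try:
                hostname = socket.gethostbyaddr(ip)[0]
            except OSError:
                hostname = ip                     # fall back to the raw address
            self.cache[ip] = hostname

        log_message["dest_hostname"] = hostname   # enrich the message
        return True                               # keep the message
```

The small cache is the kind of thing that keeps such a resolver fast enough for the message rates mentioned in the talk; without it, every message would trigger a DNS lookup.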
And finally we store the logs both locally to a file in JSON format and to Elasticsearch. And here you can see a nice screenshot from Kibana showing a world map of the IP addresses trying to attack my home router. So what are the main benefits of syslog-ng and a dedicated log management layer? It provides you with high-performance, reliable log collection. It simplifies your logging architecture, as you can use one software everywhere instead of installing many different tools for log aggregation. It's also easier to use the data, as logs are parsed and presented in a ready-to-use format. And due to efficient filtering and routing, it also means a lower load on your destinations. If you got interested in syslog-ng, our website is syslog-ng.com. The source code is available on GitHub, just like our issue tracker. You can ask questions on our mailing lists, or contact developers and users on Gitter if you prefer chat instead of the mailing list. I'm not sure if we have time for questions, but thank you for your attention. If we have some time, I'm ready for questions. I think we're massively over time at this point, unfortunately. Okay, in that case I will be around and can answer in the chat or on Telegram, whatever is better for you. Thank you, Peter, for giving us this talk, it's really amazing. I didn't realize how much has been going on in syslog-ng, like ten years after I admittedly stopped using it; I didn't realize it was doing such cool stuff. Yes, quite a lot of things happened in the past few years. It's quite a lot of fun working there, always new technologies, new connections, where can we connect from. It's definitely worth checking. Okay, I'll disconnect so I can give room to the next speaker. Thank you. Thank you.
Event logging is a central source of information both for IT security and operations, but different teams use different tools to collect and analyze log messages. The same log message is often collected by multiple applications. Having each team using different tools is complex, inefficient and makes a system less secure. Using a single application to create a dedicated log management layer independent of analytics has multiple benefits. Collecting system logs with one application locally, forwarding the logs with another one, collecting audit logs with a different app, buffering logs with a dedicated server, and processing logs with yet another app centrally means installing several different applications on your infrastructure. And you might need a different set of applications for different log analysis software. Using multiple software solutions makes a system more complex, difficult to update and needs more computing, network and storage resources as well. All of these features can be implemented using a single application which in the end can feed multiple log analysis software. A single app to learn and to follow in bug & CVE trackers. A single app to push through the security and operations teams, instead of many. Less resources needed both on the human and technical side. In my talk I show you how to implement a log management layer using syslog-ng, as this is what I know best, but other applications have similar functionality. The syslog-ng application collects logs from many different sources, performs real-time log analysis by processing and filtering them, and finally it stores the logs or routes them for further analysis. In an ideal world, all log messages come in a structured format, ready to be used for log analysis, alerting or dashboards. But in a real world only part of the logs belong to this category. Traditionally, most of the log messages come as free format text messages. These are easy to be read by humans, which was the original use of log messages. However, today logs are rarely processed by the human eye. Fortunately syslog-ng has several tools to turn unstructured and many of the structured message formats into name-value pairs, and thus delivers the benefits of structured log messages. Once you have name-value pairs, log messages can be further enriched with additional information in real-time, which helps responding to security events faster. With log messages parsed and enriched you can now make informed decisions where to store or forward log messages. You can do basic alerting already in syslog-ng and receive critical log messages on a Slack channel. There are many ready to use destinations within syslog-ng, like Kafka, MongoDB or Elasticsearch. And you can easily create your own based on the generic network or HTTP destinations and using templates to log in a format as required by a SIEM or a Logging as a Service solution.
10.5446/54628 (DOI)
So, okay, good morning everyone. My name is Patrick Fitzgerald, and I run i-Layer. We do a whole bunch of different things: consulting, writing software that sometimes doesn't work, like we all experienced on Thursday. I'm here to talk about Firebird, not necessarily its integration with LibreOffice, which is a great thing, but my experience with it; and I guess I was very surprised when I discovered that it had been integrated. So let's step through that. What's a database, that's the first thing we'll talk about, and why it's also great. What's InterBase in the first place and where did Firebird come from? They're all related. And what's good about it? Well, it's not Access. Then I'll step you through our use case and why I'm a fan of this particular database, and race through that. Any questions, feel free to ask at any point; I'll try to get to them at the end. There are only 15 slides, and in my experience I zip through presentations. So, what's a database? Well, it's not a big spreadsheet, or kind of is. If someone knows the definition, please tell the UK government, because that was a massive news item over here in the UK, where they had some automated process injecting, let's just say, hundreds of thousands of records, and somehow they thought the ultimate target for this was going to be a spreadsheet. And, well, Excel ran out of capacity, according to the contractors that built the system. So there are other ways to do things rather than spreadsheets, especially Excel spreadsheets. So what's InterBase? InterBase is something I came across in the late 90s. Yes, I'm that old; I'm even older than that. I had a project where we were building a job-tracking system. I was a fan of Pascal and Object Pascal at that point, so we bought Borland Delphi Studio Enterprise, which luckily came with this thing called an SQL database, which I had heard of back in those days. Microsoft SQL Server was just starting out, Microsoft having worked with Sybase, and Sybase started around the same time as the InterBase corporation. But Sybase started with millions of dollars' worth of venture capital, whereas InterBase was started as a garage startup. So what did this little database actually do? It was incredibly advanced for its time. It was multi-architecture: it ran on just about anything at that time, all written in C. It had an API, it had triggers, it had stored procedures. You could call external code, so you could have things that would write directly to the file system if you wanted to. And of course, as I said, it came free with Delphi Enterprise, so we had a license for it, so why not start using it? It's got a long history: development of this particular product started in 1984, and it has some interesting facts associated with it. Now, apparently, it was designed as a targeting computer for the M1 tank. Physicists in the audience, please correct me, but it was built into the tank because the targeting system would reset every time the gun was fired, and it had to come back online immediately, or as rapidly as possible, without any database corruption.
And people have speculated it was to keep track of the number of rounds fired by the gun, but I think it was actually more sophisticated than that; it would have been something such as a radar readout or something for target sensing. Apparently the gun would be fired and, I would imagine, the shock, the vibration, would probably reset the computer. But I've also been told it was an EMP, let's say an electromagnetic pulse, that would basically reset the computer every time the gun was fired, because obviously it wasn't adequately shielded. I'm not sure if that's true, but that's what I've been told. So it was built to be highly reliable, which is the key takeaway. It's compact, so the database will fit in a single file. It's accessed via a connection string by the client that's accessing it. The binary sits in, or back in those days sat in, three meg of RAM and three meg of disk space as well, so the entire installation is extremely compact, and it builds databases that are highly resilient to power outages or any kind of failure of the infrastructure. It's almost guaranteed that the database will be intact, and that's more than can be said for a whole lot of other databases that I've used. I won't name any names, but I think a lot of us have gone through database recoveries with varying degrees of joy, shock, horror and tears. With this particular database we've never had any problems. So what's Firebird then? If that's InterBase, what's Firebird? Firebird has got a long history, a little bit like SUSE actually, in terms of different ownership of the company. At some point in the late 90s InterBase was acquired by Ashton-Tate, and then Ashton-Tate was acquired by Borland, and then Borland decided they were going to spin off InterBase: take that arm of the business out into a different company, take it public, open source it, you know, do all the trendy things that Linux was just beginning to make popular in the investment community. In the year 2000 the bubble burst, venture capital funding collapsed, IPOs, public offerings of companies, collapsed. It was a big problem, partially mixed up with the build-up to the year 2000 and possibly what happened with bringing in lots and lots of consulting dollars. The whole Internet had, as everyone thought, been overhyped, and a lot of companies went down at that point. But for a little while at least, the InterBase source code was open, on a repo. And so a bunch of developers, including the lead developers, resigned from Borland, from InterBase Software Corporation as it was about to be named, and basically took all the brains of the business away. At the same time they took a copy of the openly licensed source code, and they announced the birth of a new project called Firebird SQL. At that point InterBase went back to closed source; you can still buy InterBase, it's a completely different product now. But Firebird was born from that point. There was just a small window where the source became available; the authors left and created something new.
And about six months later they released version 1 of Firebird, and at that point it supported Linux, Windows, Solaris and HP-UX. So why is it so good? What's really good about it? Well, it's very good for embedded systems. If you look at the repos of just about any flavor of Raspberry Pi build, you'll see it's there. It could potentially fit in smaller architectures, and I guess it could be built for them, but realistically the smallest compute size that can run Linux fully is a Raspberry Pi; I could be wrong there. If you need software that you are delivering to a customer, or to a device, or an architecture where you need a maintenance-free data store, where you just need to keep your data in there, then Firebird is a perfect choice. SQLite is good, but it's really designed as a single-user data store, whereas Firebird is multi-user from the very ground up and allows concurrent access. It's good for data collection, unattended systems, or any other multi-user system where a bigger database, Postgres, MySQL, Oracle, Microsoft, whatever, would be overkill and you just don't want to put the licensing dollars or the maintenance into it. Something like a web kiosk would be perfect. And of course it's very good at not being Microsoft Access, which was something that we considered once, and I know other people who did and regretted it. So, our use case. Back in the late 90s I had a company called Ocean Web Digital, and we were thinking of a way to track the work we were doing for a number of customers. We were doing a lot of work with a variety of customers in fractions of a day; we'd be working with a lot of small businesses as their outsourced IT department. So I set about designing this system and thought from the outset that it should be a CGI server that would deliver HTML to wherever people were logged on, say at the customer. If an engineer was at a customer site, we could bill the work then and there, and the engineers could write notes into the job, into the work request, or WR as we called them, and the time taken to do the work would be tracked. Plus a desktop Windows app, I should say, for accounts, that injects invoices into Sage Accounts at the end of the month, so it automated the billing process: the accountants or bookkeepers press a button and invoices would just be spat out by Sage. It lasted for about ten years, but it was beginning to need a refresh. We needed to put more into it, and looking at that kind of level of code, which is not lines of code, but each one of those little icons has probably dozens or hundreds of lines of code behind it, it became a bit of a nightmare. Complexity had taken over, and that's just a fraction of the diagram. So we came up with a solution, which we found in Django. I'm not sure if anyone here has experience with Django; it's a fantastic web front-end and back-end combination framework. And it has this little command called inspectdb: you point it at a database, if you've got the driver for it in Python. Some guys had just developed the Python Firebird driver, which was integrated with Django.
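As a hedged sketch of roughly that workflow, pointing Django at an existing Firebird database and letting inspectdb generate the models, the pieces could look like this. The ENGINE string, paths and credentials are illustrative assumptions; the exact backend name depends on which third-party Firebird backend package is installed.

```python
# settings.py -- pointing Django at an existing Firebird database.
# Assumption: a third-party Firebird backend for Django is installed;
# the ENGINE value below is illustrative, not a guaranteed package name.
DATABASES = {
    "default": {
        "ENGINE": "firebird",              # illustrative backend name
        "NAME": "/srv/data/jobs.fdb",      # the single database file
        "HOST": "dbserver.example.com",
        "USER": "SYSDBA",
        "PASSWORD": "secret",
        "PORT": "3050",                    # Firebird's default port
    }
}

# Then, from the shell, generate models from the existing schema:
#   python manage.py inspectdb > jobs/models.py
# inspectdb emits one Django model per table, with "managed = False" in the
# Meta class by default, so Django reads the legacy schema without trying
# to migrate or alter it.
```

The managed = False default is what makes the "no migration, leave the database as is" approach work: Django only describes the existing tables, it never rewrites them.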
So I thought, let's take a look at this. inspectdb looks into an existing database and creates all the Django models.py files; for those that don't know Django, the models files are the exact field descriptions of every field and every table in an existing database. That gave us the picture we could work from as to what we needed to do to recreate, well, to modernize the system. But the best thing was we didn't need to migrate the data into a new database. We could just leave it as is and upgrade it, and it's still running, on Firebird version 3 at the moment. We could just migrate the client side, the website, and keep the Windows accounting application untouched. That's still there, hasn't seen an update for a very, very long time; it's been working for a decade. No migration, no new database infrastructure to build, nothing. The result was something we called WaveSweep. This screenshot is about ten years old, and so is the UI I think, but it's still functional, still works, still billing customers. It works a treat, it's easily accessible, and all the work you need to do is now in a Python framework that is modernized. So, I'm coming close to the end of my time slot, and I've gone past it. That's my experience with it. I think Firebird integration with LibreOffice is a good thing. If anyone out there has the knowledge of how it came to be in LibreOffice: I haven't really looked into it, but it's a good thing, and if you're running LibreOffice it's already on your machine. So if you're a developer, a project manager, or anyone used to dealing with client problems, you've already got a high-performance database server sitting on your machine, and it's actively being developed. And it's good for all the reasons I've outlined. Patrick, you actually have 30 minutes, so you don't need to rush. Oh, okay, I thought it was 15. No, you're good. Well, then I've got 15 minutes for questions, or I could do it all again a lot more slowly. If there are any questions, feel free to ask. So, you had mentioned the M1 tank, and that basically it resets with each round that goes out? Well, yeah, exactly. Now, I don't think anyone really knows the history. They were awarded a three-and-a-half-million-dollar Department of Defense contract. There were rumors that it was designed for monitoring cleaning systems or something like that, and there are rumors that it was actually to do with the development of a targeting computer for the M1 tank. Whether or not that's true, I don't think anyone is going to be able to disclose with any level of certainty, because of all the... Yeah, sure. But apparently. Now, and any physicists can let me know, an EMP pulse is usually associated with nuclear weapons. I know the US military had things such as nuclear-tipped rounds that, in a battlefield situation like a nuclear war, they could use with low yield, and maybe it was for that. I don't know.
But what I heard is that every round would reset the computer. Whether it's an EMP pulse, I don't know; for some reason it would reset the computer, and that meant it had to be highly reliable. And a lot of the things that were already built into InterBase are things that other databases only started adding much later in their lifespan. So, you know, it's definitely not the solution for everything, but as an embedded database it is fantastic. Sounds interesting. I would probably rule out the EMP for sure, because... yeah, that wouldn't make sense. I mean, well, it could be that there's some sort of electromagnetic wave that happens from firing a large-caliber gun that was only ever noticed with the advent of computers inside tanks, and of course computers inside tanks would have been a fairly recent use case, I'd say. Have you seen Russia's new tank? No. It's interesting. Okay. Well, I mean, it's way more advanced than the Abrams is now. Right. Well, I think the Abrams was designed in the late 70s or something, so yeah, it's pretty old. Yeah. I think the history of how we got to this point, the multiple different ownerships of the company and the fact that a bunch of the developers left, is a bit like what happened with MySQL: the guys left and the community followed them. So Firebird has got a large community of very dedicated people. It just doesn't get much love, and it's quite interesting that it's now included in LibreOffice, but it's definitely a lot more capable, competent and complete in features than what you'd expect from an open source desktop productivity package. I wonder if we should have asked Florian if he knows any details as to how and why it was included. Let me ask: does anyone in the audience know how Firebird came to be included in LibreOffice? I guess that's a no. Could be on the chat. With the chat. So what do you mean by included? I mean, is it embedded into the document? No, it's included when you install LibreOffice. The thing is, the original inclusion started one or two years ago or something like that, and we had many problems and bug reports about stuff not working, and basically it's back in experimental mode. So you have to explicitly enable experimental features to use it, because the migration from the old database, HSQLDB, some Java database, I'm not an expert in that, was simply not working for people who wanted to migrate to Firebird. Yeah, I mean, I tried. Last month I ported LibreOffice to Windows ARM64. I built Firebird with it and, well, it's the cross-compilation: at least for me, after the second or third bug I encountered, which was something with cross-compilation, I simply dropped it for now. So I'm not sure if somebody else has already tried to build Firebird for Windows ARM64; it's relatively new. Yeah, I mean, if there's a way to reach out to the FirebirdSQL community; I've never actually seen any references to them at any of the big events. It could be that they might have already done it and maybe it's not shared, or not shared in the right place.
I just tried it for a few minutes, and after the second complaint from configure I said okay, there's so much other stuff I have to port to get LibreOffice running on this platform, I will just skip it, because currently it's experimental anyway and the whole ARM64 build is experimental at this moment. I was just wondering what's the best way to build it, because LibreOffice has like 10 or 15 patches for it, including a kind of large Windows patch, so I said okay, there's too much stuff to try to figure out what's going on there. I looked at the home page and couldn't find exact build instructions; at least when I had a look I didn't see anything, just foundation stuff and all the other stuff, nothing about the software itself. So currently it's back to experimental. I don't know if somebody from the Firebird community would be interested in looking into it and helping develop there, but LibreOffice Base is not in a highly active development mode, because the main targets are definitely Writer and Calc, so there are not many people interested in Base. Yeah, it's one of those things: you try and get Calc to talk to different databases, and a lot of the time I've given up. That's probably because I haven't tried hard enough, but just seeing it included I think is a good thing. I never got into HSQLDB at all, because I was already using Firebird, so I haven't done much with it, and I haven't done much with Firebird for a long time either, but I just think it's a positive thing, because it's a very mature code base. It's been around, it's very compact, and it should be a lot faster than Access. Yes, as Peter just mentioned in the chat. Especially since you can run the Firebird SQL engine on a different server if you want to: you can build your database locally, take the file and drop it onto your Firebird server, and have your users access it that way. Because that's the other thing: access to the database is just a username, a password and a connection string, and the connection string just specifies the database file. So there are two... Can we wrap it up, so that we can go to the next talk? Sure. Okay. So yeah, if anyone wants to contact me about it, I'm not necessarily an expert in it, but I've had experience with it. Thank you very much. Thanks. Next up will be you, if I'm saying that correctly, and he's going to be talking about marketing LibreOffice in Japan. I'm looking forward to that.
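To illustrate the connection-string point from the Q&A above, here is a minimal sketch using the fdb Python driver mentioned earlier in the talk. The server name, file path and credentials are placeholders, not values from the talk.

```python
import fdb  # the Python Firebird driver

# The DSN is just "server:/path/to/database-file" -- the file is the database.
con = fdb.connect(
    dsn="dbserver.example.com:/srv/data/jobs.fdb",
    user="SYSDBA",
    password="secret",
)

cur = con.cursor()
# List the user tables stored in that single database file.
cur.execute(
    "SELECT rdb$relation_name FROM rdb$relations "
    "WHERE COALESCE(rdb$system_flag, 0) = 0"
)
for (table_name,) in cur.fetchall():
    print(table_name.strip())

con.close()
```

Moving the same .fdb file from a local machine to a central Firebird server only changes the DSN string; the client code stays the same.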
With 20 years of experience in Firebird, let me take you through its history (rumours and facts) and the varying use-cases that we've discovered for it at my company, i-Layer, over that time. It really is the battle-proven database that is compact, fast, extremely reliable, and now installed as part of LibreOffice. From managing a company-wide CRM and work tracking system to an embedded database system that controls mass Linux roll-outs - it's the RDBMS that's got it all.
10.5446/54629 (DOI)
Okay everybody, I hope you can hear me well. This is my talk on Public Money, Public Code: global problems need global solutions. In the next 30 minutes I will tell you a bit about our Public Money, Public Code campaign, what happened during the corona crisis, and why free software is a good solution for tackling this crisis when it comes to software and technical solutions. First of all, the Free Software Foundation Europe is a charity that empowers users to control technology. Among these users are not just individuals or companies, but also governments and public bodies, so for the next 30 minutes we will focus on public bodies and how they should use free software. Let me start with this comic. I guess you are all familiar with the US nuclear chain of command: there's the president, the secretary of defense and so on, and you all know there is this red button. But the question always is: who installed the red button, and what does the red button do? We need transparency in order to see whether the chain of command is followed, whether the laws are followed, and whether the red button is going to do what it is supposed to do. So it's not just about transparency, although that is a very important first point; there are four freedoms of free software, and I will tell you in the next 30 minutes why these four freedoms are especially important when it comes to government action. The first freedom of free software is to use it; then you can study it, you can share it and you can improve it. This means you can use the software for any purpose, without any restrictions. You can study the code, as it is transparent and can be analyzed by anyone; this is the red button case. You can share it with others without any limitations, and the price doesn't matter: the "free" in free software doesn't come from free beer, but from these four freedoms. And you are free to improve or modify the software, so you can adapt it to your needs, and you are free to give your changes back to the community. Whenever we have these four freedoms, it is free software, and it is very important to have free software in place in order to tackle the crisis. So why should you support free software, and why should especially governments support free software? First of all, it's about digital sovereignty: in order to establish trustworthy systems, public bodies must ensure they have full control over the software and the computer systems they are using. This is a very important, key point. Another main argument is that public bodies are financed through taxes, so they must make sure they spend the funds in the most efficient way possible. Both of these things can be guaranteed by the use of free software. It becomes a bit clearer if you compare it to proprietary software or proprietary solutions on the market. First of all, you have a problem with interoperability: it is a problem with proprietary solutions to connect them to each other, to share data, documents and so on. Normally you end up in a vendor lock-in, which means you have to choose from the bunch of software from one vendor, and you can only have this interoperability guaranteed if you are using the systems of that vendor.
So when it comes, for example, to updates or maintenance or things like this, you always have to come back to this vendor in order to make sure that your software is up to date, and even if you want to adapt or modify something, you always have to go back to this one vendor and ask him to do it. And this comes with unpredictable costs for the future, because this one vendor can then set the price more or less as he wishes. So this vendor lock-in is a major issue when it comes to proprietary software. Also, and we've seen this especially during the corona crisis, there is low acceptance by citizens when it comes to closed-source proprietary solutions or software. And on top of all this, you have to pay for licenses: to use the software you have to pay a license fee, and this investment is completely lost once you are using proprietary software, because you can't take that money and, for example, modify the software on your own. And there are security issues: it's a lot harder to look into the code of proprietary software and therefore to find bugs or backdoors, for example. Free software is a solution to all of this. If we compare all these points with free software, we see that we have interoperability by default, due to open standards, so it's very easy to share data and information between free software solutions. You have independence through free licenses, so you have these four freedoms, for example to adapt the software to your needs, to have tailored software which fits your needs, to modify it later on and to share it with others. This also gives you the possibility to collaborate and thus share risks and costs: especially when it comes to public administrations, the demands are more or less equal, so it makes a lot of sense to collaborate here and work on a common software solution. It's also transparent by default, as I just said around the four freedoms, so acceptance by the citizens is guaranteed, and everybody can see what the code does, what the red button does. And what we've seen on the market is that there is strong involvement of local partners when public bodies procure free software; I have some figures on that for you on the next slide. And you have transparent code, which makes it easier to check for bugs or backdoors or similar things, so you can more easily make sure that your software is secure. In general free software isn't secure by default, but you have the chance to modify the code easily, and therefore, for example, fix bugs and the like. So if you compare the two, it's a very good idea to use free software, and especially when it comes to governments and public bodies it becomes very clear that there are good arguments to do so. And, as I just mentioned, let's have a quick look at the market. For now, governments are amongst the largest purchasers of IT goods and services and comprise up to 27% of the revenue of software firms. This means they should normally be a strong, key player on the market, but to be honest, they are not. They are all fragmented: they procure on their own, they look for their own solutions, and so on.
And therefore these 27% might look like a huge player on the market, but at the moment they are not a key player. They are just buying, buying, buying, paying for licenses and so on. Imagine what would happen if governments or public bodies invested this budget into free software; I guess our world would look a bit different. And when we have a look at regulation, for example the case of France, we see that it's also good for the economy to switch to free software. In France there is, let's say, a free-software-friendly regulation in place, and this has already led to an increase in companies that use free software, an increase in the number of IT-related startups, an increase in employees, and, what's very important, a yearly decrease in software-related patents. So you can see: if you start working on this, on your regulation, and move more in the direction of free software, it also directly helps your local market. We also see this with Barcelona. They are collaborating with other cities, and they have some rules, and they are kind of strict about them, saying that 70% of the software budget has to be invested in free software. This led to the fact that of the 3,000 companies they worked with in the last year, 60% have been SMEs. So you can see it's good for your local market and it's good for small and medium enterprises, and your investments are no longer lost on licenses; you can also foster your local IT market. So to sum up, for governments it's important to use free software because they get strong partners in their region, especially SMEs, small and medium enterprises. They are able to have tailored software that suits their needs, not a Windows-style business model. And with a transparent process you don't have to reinvent the wheel again and again: you can share expertise, you can share costs, and you can reuse what others have already done. This is a very important point, that you don't have to reinvent the wheel, especially when it comes to international and cross-border cooperation during a crisis. What we've seen during the corona crisis is that there is a global problem and that we need global solutions, because this crisis is global, it crosses borders, and therefore we have to work together across borders. And in this global crisis, when it comes to software, the demands are very similar. For sure it's not the same everywhere in the world, but the solutions we have been looking for are more or less similar, and the demands and needs are similar. There is a specific need for hardware and a specific need for software. Just for a few examples, as you have also figured out, there were solutions for home office and remote working, like video conferencing tools, and also, and this was very important, the debate around tracing apps. There is still a debate, and this debate showed us that it is very important to have free software. The global solution for this is that we need interoperability through open standards, because we need to work together across borders.
We need the free licenses in order to make sure that we can share software, especially with other countries that don't have the resources to develop the software at the moment but desperately need it, so why not just share it with them? That's a good thing to do. And what we've also seen in many cases is collaboration across borders, and thus you can foster innovation, which is also very important to have a fast solution in place, one which can then be modified by others across borders. You have acceptance due to the transparency: especially around the tracing apps we have seen that transparency is a major argument in convincing people to use these apps. And you can involve all stakeholders who are somehow working on this. That doesn't mean there only have to be coders; there are so many stakeholders with so much knowledge around, and so free software is a good way to make sure that everybody can work on a good solution. So let's have a look at the concrete example of these tracing apps and how the debate went. At the very beginning of this debate we stepped in and reached out to decision makers and said there are three demands from our side for these apps: first of all, they have to be used voluntarily; they have to respect fundamental rights; and they have to be free software. With these three demands we entered the debate, and our arguments were well received. For example, the World Health Organization followed our demand in May 2020 and released some considerations saying that if these tracing apps are put in place, they have to be fully transparent and they also have to be open source. Also within the European Union they followed our demands: within the eHealth network, which is a network of member states and the European Commission, they released a common toolbox for member states and set out recommendations on how these apps should look. The European Commission, together with the member states, said that the apps have to be published transparently, and, this is very important, in a way that makes sure they can be reused and that there is interoperability; they also made a point about security, as the code is transparent. This was a major thing for us: the European Commission not only followed us on the point of transparency, but also on interoperability, the reuse factor, and the security argument. So what happened afterwards is that many countries released their apps as free software. There is, for example, a Git repository where you can see all the apps in the world that have so far been released under a free and open source license. And the European Commission then started to make sure that these apps can work together. At the beginning of this crisis we were all in lockdown, so at first it was more or less enough to have these apps available in your own country. But after the lockdown people started to travel again, also across borders, and as you know we have a lot of member states; if every member state has its own solution which is not interoperable, then you will have problems tracing contacts across borders. And so the European Commission also released an implementing decision on this, again strengthening the point of interoperability of these apps.
And so they need to be free software, and thus you can make sure that, at least in Europe, in most of the member states, you have an app which is able to communicate with the app from another country, so you can use these tracing apps across borders as well. And this is only possible due to the fact that they are released as free software. So we've seen, around the corona tracing app debate, that the use of free software is very important in order to tackle these crises, which are global and where things happen across borders. What we've also seen is that there were loads of hackathons during the last months and weeks, and the thing is that most of them were funded by governments. So we also stepped into that debate and said: only free software creates global solutions, with all the arguments you just heard. Unfortunately, we haven't been successful with all of these hackathons. For example, the global hackathon did not require in the end that the resulting solutions be released as free software. But here you can see an example of how we tried to reach out: we also used social networks, not just mailings and personal contacts, and tried to make it a public debate. This strategy also helped us with some other hackathons; for example, in Germany the results have been released under a free software license. So you see there is still room for improvement, especially when it comes to hackathons, and our arguments are heard in some parts of the debate, but not everywhere. Therefore it's important to keep reaching out to decision makers and telling them about the advantages of using free software, especially when it comes to such a global crisis. We've also seen that for remote working a lot of us had problems; companies had problems, but also governments and public bodies. They had to choose, for example, solutions for video conferencing and other tools to work together collaboratively. So, together with our community, we started to write a wiki bringing together all the free software solutions we or our community have tested so far, in order to present alternatives, so that not everybody has to go to Zoom, for example, but can use BigBlueButton instead. We tried to collect them and spread the word about free software alternatives which can be used more or less easily. This is a living document, so if you have solutions in place, please reach out to us; we are happy to add them to our wiki, or, if you have an account there, please just add them yourself. Also, what we've seen during the crisis from the company side is that many companies started to promote their proprietary software as free software. So what we've done is write a longer text on this, again referring to the four freedoms and going through the strangest advertisements we have seen, and I have some of them for you here. For example, software was offered for free but time-limited. This is clearly not free software, because it's very likely that you will have to pay fees for using the software after the crisis. We've also seen software that can only be used on some workstations or by a limited number of users.
And this is also clearly not free software, because the four freedoms guarantee that you can use it on as many workstations and with as many users as you wish. Sometimes the word "trial" was included in the advertisement of proprietary software promoted as free software. This is also clearly not free software, because after the trial period ends you have to pay the full fees for the tool. Then, and this happened quite often, some vendors released their software for free for hospitals, schools or other specific sectors which had been hit very hard by the crisis. But this is not free software either, and you might end up in a vendor lock-in very quickly, because the four freedoms guarantee that the software is not only available to hospitals but can be used in every sector you wish; and it's very likely that once you start using this proprietary software and need updates and upgrades, you will have to pay for the license in the near future. The strangest thing I have seen was an advertisement for proprietary software promoted as free software because you could win a license. This is, again, clearly not free software, and even if you are one of the lucky winners you will have to pay for the software one day, again when it comes to updates and upgrades. The point I want to make is that it's really important to have free software solutions in place, and this is a learning from this crisis: we need more free software solutions, especially for governments, so that they can make sure citizens get secure software and don't have to search for the quickest and supposedly best solution, only to end up in a vendor lock-in half a year later. That is not helpful at all and won't help to tackle a crisis. We've also sometimes seen that a creator said they will make the tool open source. This is currently not free software, and you have to be careful with this kind of promise: they might only free parts of the software, they might then stop supporting the tool with updates, and then you are forced to buy an upgrade of the non-free version again. So this is also something you should keep an eye on. As a last point on this, I want to point you to the Software Freedom Podcast episode we made with the GNU Health project; just, I think, an hour ago Luis gave a talk on the project, and if you want to hear a podcast about it, feel free to go to our website. It is a very interesting project, and it shows that good solutions were already in place before the crisis. This is, I think, an area where governments should invest money: in free software projects like GNU Health, and not in proprietary software which pushes you into vendor lock-ins and doesn't give you the sovereignty to act fast when a crisis appears. Already before this crisis, we, together with hundreds of organizations and tens of thousands of people, demanded that publicly financed software developed for the public sector must be made available under a free software license. Our learning from this crisis is that it is now even more important than ever before. That's why we have Public Money, Public Code: we are demanding that if it is public money, it should be public code as well.
Here you can see some of our supporters. If you or your organization haven't signed this call so far, feel free to do so, and also reach out to us if you know somebody who could or should sign it, and we will reach out as well. There are several ways you can help us make free software the solution for governments. We are not only working on this ourselves; especially with this campaign we've seen again that governments need to be convinced to use more free software. And it's about many small people in many small places who do many small things that can alter the face of the world. So please help us to do so and to convince governments to switch to free software, because, as we've seen during the crisis, free software solutions are desperately needed to make sure we can work together across borders to tackle this crisis. I think we now have five minutes left for questions. If I can't answer your question right now, please also send me an email, and I'm happy to answer it later. So, I see a question on the free beer argument, and that some companies are not giving back, if I get it right. Yes, for sure, that's a problem: some companies just use free software. But especially with this campaign we are trying to reach out to governments and convince them to share their code and to work together. As I just said, public bodies are one of the key players on the market, and they can change the face of the software market if they step in, start to create these solutions and share them with us and with others. And as we've seen, this also helps to foster the IT market, especially around SMEs, and I think there are loads of good SMEs around who do give back. So a first step is to convince governments to step in here. Okay, I just wanted to point out schools, which are really important. They are obviously public administrations most of the time; I mean, there are also private schools obviously, but broadly speaking most schools are public, and I think it is really important to care about them, because schools are the starting point of society and the future of our society. So I think this kind of campaign should be brought there first, as a first step, and then proceed from there, because it is important for the whole of the administrations. It is also about the plurality of solutions, of available possibilities: they should teach these kinds of topics and these notions to the people growing up and attending the schools, and possibly use free software in the schools themselves. And that's it. Just to close, I can say that I once shared a blog post about a public administration here in Italy that had set up many servers with, for example, Jitsi and this kind of stuff, and BigBlueButton, and made them available to the public schools, but they have been completely ignored, even being a public administration itself. So it looks like the left hand doesn't know what the right hand is doing in our Italian public administration. That's really sad, and that's another reason to go there and let them know that there is public code, and public and free software. That's it, and thank you for the speech, it was really interesting.
Thanks a lot. I'm totally with you, and I would be really interested in this Italian case, so if you could send me an email about it, that would be awesome, and I can follow up on it. Otherwise, I think my time is over. Thanks for listening, enjoy the rest of this conference, and see you.
In a time when humanity needs to work together to find solutions for a crisis, we cannot afford to reinvent the wheel again and again. Global problems need global solutions! It is Free Software that enables global cooperation for code development. Any proprietary solution will inevitably lead to countless isolated solutions and waste energy and time which we as humanity cannot afford in such a critical situation. Free Software licences allow sharing of code in any jurisdiction. Solutions developed in one country can be reused and adapted in another one. Already before this crisis hundreds of organisations and tens of thousands of people signed an open letter "Public Money? Public Code!" and demanded that publicly financed software developed for the public sector must be made publicly available under Free Software licences. It is now even more important than ever before to tackle this crisis.
10.5446/54633 (DOI)
configuration tools which are shipped with the SUSE products and also with openSUSE. Basically they give us a UI to configure the system, and underneath they use the libraries and tools which you would normally use from the command line. For the UI, the modules use libyui underneath, which is the interface engine that provides an abstraction layer: you can develop your application in C++ or Ruby, write it once, and then your application will work in Qt, in GTK (whose development has slowed down at the moment, which is why we don't support it for the REST API) and in ncurses. So you implement the UI once using this abstraction layer and then you can run your application on servers, or also on desktop installations where you would have, for instance, GNOME installed.

How are the YaST components currently tested? We have unit tests which use the RSpec framework, and that explains our later choice, because we started with RSpec and we plan to run those integration tests with RSpec as well, as part of CI, basically in the same phase in which the unit tests are also running. Then we have the integration and system testing which runs in openQA. You might have heard about this tool; it can be compared to a Swiss Army knife: it is capable of many things, but some tasks it does not perform that well, because you don't want to use the screwdriver from the Swiss Army knife all the time; sometimes you want the full-size screwdriver. And that was exactly the case for us. First of all, to run those tasks openQA is quite heavy, and developers would have to spawn the whole environment just to run the tests. It also uses screen-based mechanisms, which are very costly to maintain when there are font changes or other changes in the UI. Therefore we came up with our own solution.

So, briefly, what is RSpec: a behavior-driven development framework. It provides a mocking mechanism and many different assertion mechanisms, so it is pretty rich in that sense; it can easily be used for unit and integration testing in the terminology we have just mentioned, and it already has reporting capabilities, which is pretty useful. As for the REST API: we have our UI, we want to operate on it and not rely on some screen-based tooling. What we have developed is the server side in the application: a dynamically loaded plugin, developed in C++, which starts an HTTP server where you can send requests and operate on the UI; the library then just generates events to simulate the user input. It also has some flaws, as many of those frameworks do: you can, for example, operate on controls which are actually disabled in the code, so in some cases it provides more capabilities than a normal user would have. Obviously we still need to cover that part somewhere else, so as not to miss regressions in this area.
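To make the interaction model above a bit more concrete, here is a minimal sketch of how a test could talk to such an HTTP endpoint directly: read widget properties and send an action. The port, the "/widgets" path and the parameter names (id, action) are assumptions made for illustration only, not a statement of the actual libyui-rest-api routes, so treat this as a hedged sketch rather than the framework's documented client API.

```ruby
# Minimal sketch of driving a libyui-based UI over an HTTP REST API.
# The port, the "/widgets" path and the parameter names are assumptions made
# for illustration; the real plugin may expose different routes.
require "json"
require "net/http"

HOST = "localhost"
PORT = 9999 # assumed; in reality chosen when starting the application under test

def ui_uri(params)
  URI::HTTP.build(host: HOST, port: PORT, path: "/widgets",
                  query: URI.encode_www_form(params))
end

# Read properties of a widget identified by its ID, e.g. the items of a table
# (assuming the endpoint answers with a JSON array of matching widgets).
table = JSON.parse(Net::HTTP.get(ui_uri(id: "table"))).first
puts "Currently displayed rows: #{table['items'].inspect}" if table

# Simulate user input: press a button by sending an action for its widget ID.
Net::HTTP.post(ui_uri(id: "ok", action: "press"), "")
```

In practice these raw calls would sit behind a small client helper, so that the specs only deal with widget IDs; that is essentially what the Ruby client side described next provides.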
So it provides not only a way to operate on the controls but also to read the state of their properties, which is essential for testing. Here you can see the example of a sample application with just a table in it: it lists the items which are currently displayed, and there is also a property, I am not sure whether it is big enough to read, which shows which entry is currently selected. So you can verify those and also operate on them, as we will see later. Besides the server side we have also developed the client side, to write the tests in RSpec. We started with Ruby; I will mention the further steps we plan later. We basically use the functionality of RSpec, and the advantage here is that we just use the IDs of the controls, so apart from some small exceptions your test can be executed in both Qt and ncurses. You can basically test both at the same time with no extra development cost.

So let me briefly demo what we have. I have written a small test just to demonstrate the capabilities; it is not an ideal test. We have the YaST module which displays the content of /etc/hosts and allows editing it, and there are three tests: one verifies that there is a localhost entry with the loopback interface IP address, one adds a new entry, and one deletes it again (a rough sketch of what such a spec could look like follows after this section). We can just execute this test; I have introduced some sleeps so that we can see what happens, because otherwise it would run really fast. You can see that we entered the values in these text fields, then pressed OK, so the entry appeared here; then we just press the delete button and close the application. And then we can basically run the same thing in ncurses just by starting it there. There are some tricky parts in wrapping it, so in case there are questions I can share more details about that. And, as always during a presentation, something did not work, but basically we can run the same application, and the same test, in ncurses as well.

As for the next steps, there are a lot of things to develop, because we want to start benefiting from this framework: we have quite some items to implement, including support for HTTPS, and since in openQA our tests are mainly written in Perl, we also plan to develop a Perl module for the client-side support. I have added some references in case you are interested, because we also aim to advertise this framework a bit, and if some open source communities grow around it, we would really appreciate that. So thank you for your attention; are there any questions? I guess I am right on time, but there is no other talk afterwards... actually there is, we had a rescheduled session. So thank you; if you do have any questions for Rodion, you can probably ping him on Telegram, I assume you are on there, right? I will join, yeah. Cool. Sarah's talk is up next, and then we will be right on schedule, with a 15-minute break between her talk and the next one. So, Rodion, can you go ahead and stop sharing your screen, and Sarah, could you go ahead.
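As a rough illustration of the three demo tests just described (check the loopback entry, add a host, delete it again), a spec skeleton could look roughly like the following. The small helper class and its method names (items, click, fill, select_row), the HTTP routes and action names, the widget IDs and the sample host data are hypothetical placeholders rather than the actual client gem's API; the point is only that a widget-ID based spec like this can drive both the Qt and the ncurses frontend unchanged.

```ruby
# Sketch of the three demo tests: loopback check, add an entry, delete it again.
# HTTP routes, action names, widget IDs and sample data are assumptions carried
# over from the previous sketch, not the actual libyui client contract.
require "json"
require "net/http"
require "rspec"

class UiSketch
  def initialize(host: "localhost", port: 9999) # assumed port
    @host = host
    @port = port
  end

  # Item labels of a table widget, identified by its widget ID.
  def items(id)
    widget = JSON.parse(Net::HTTP.get(uri(id: id))).first || {}
    Array(widget["items"]).map { |item| Array(item["labels"]).join(" ") }
  end

  def click(id)
    act(id: id, action: "press")
  end

  def fill(id, text)
    act(id: id, action: "enter_text", value: text)
  end

  def select_row(id, value)
    act(id: id, action: "select", value: value)
  end

  private

  def act(params)
    Net::HTTP.post(uri(params), "")
  end

  def uri(params)
    URI::HTTP.build(host: @host, port: @port, path: "/widgets",
                    query: URI.encode_www_form(params))
  end
end

RSpec.describe "YaST /etc/hosts module (sketch)" do
  let(:ui) { UiSketch.new }

  it "lists the loopback entry" do
    expect(ui.items("table")).to include(match(/127\.0\.0\.1.*localhost/))
  end

  it "adds a new entry" do
    ui.click("add")
    ui.fill("ip", "192.168.1.10")        # sample data only
    ui.fill("hostname", "example.test")  # sample data only
    ui.click("ok")
    expect(ui.items("table")).to include(match(/example\.test/))
  end

  it "deletes the entry again" do
    ui.select_row("table", "example.test")
    ui.click("delete")
    expect(ui.items("table")).not_to include(match(/example\.test/))
  end
end
```

Because a spec like this never refers to pixels or screenshots, font or theme changes in either frontend should not require touching the tests, which is the maintenance win over screen-based tooling that the talk describes.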
libyui is a library which allows writing applications in Ruby and then running them in ncurses or Qt at no additional cost. Before, there was no specialized framework for the integration tests. libyui-rest-api and the client Ruby library allow writing tests using RSpec and significantly reduce the maintenance of the tests in comparison to screen-based testing tools. The solution allows querying the UI properties over HTTP using a REST API. This makes it possible to automate the UI interaction steps and avoid screen-based tools. The API allows reading properties of the UI, so the displayed values can be validated. It also allows interacting with the UI: clicking buttons, toggling check boxes, entering text. In combination with client tests we can write scalable tests and validate the UI in detail. During this talk we will learn how the framework works and how it can be used to test YaST applications. We will also briefly cover testing at different stages of the development cycle in the context of YaST modules, RSpec and libyui.
10.5446/54635 (DOI)
So let's talk about what openSUSE Leap is. You have two distributions in openSUSE; actually, you could say even more if you would count MicroOS and so on, but if we talk about the main ones, it is Leap and Tumbleweed, right? Leap, I like to say, is trying to bridge community and enterprise. It is the distribution which is based on the latest version of SUSE Linux Enterprise available at the date, with typically a 12-month release cycle. I really like to use the word "typically" here because of some of the next slides, and "the latest version of SUSE Linux Enterprise" is also important for the very last slide. Earlier this year we had the retrospective and we asked users what they value about Leap, how the release went, and so on. It seems the strong sides that were actually embraced by the community were the installer stability and the seamless migrations, which were reported to be effortless, and people in general loved it; maybe not all the parts, but generally they did. So these would be, I would say, the biggest strengths of Leap. Some of them may probably be applicable to Tumbleweed as well, but migrations and things like that are really the Leap area. The distribution is often profiled as the most stable one and easy to use; you can see that on Reddit and so on. I guess the reason for it is that there shouldn't be any radical or disruptive changes between minor updates; that translates to service packs in SLE. Generally we shouldn't completely break the distribution's package set between releases, and therefore I expect no bigger issues on migrations. People say that this is the Linux distribution for beginners and pros, and some users say that this is the KDE distribution. So much about Leap. Is there anybody who hadn't heard about Leap before, just to double-check with the audience? Yeah, that's good.

CtLG, Closing the Leap Gap, was already talked about by Marcus Noga, who covered building containers and Closing the Leap Gap; I believe the best way to learn about Closing the Leap Gap is to see the presentation from Marcus. I literally have about three slides on it, so not much, but I can go a little bit deeper if necessary; I think everybody has some broad idea of it. CtLG is a SUSE-driven effort to bring Leap and SLE closer together than ever before. This brings some challenges, some new opportunities and also some pros and cons. I will talk about these opportunities and so on in my next talk; here I really want to talk about the schedules and what to expect from the next releases. If you look at the effort, some people just see that we will change the way the distribution is built, but there is a little bit more to it than that. The categories that I see are basically these: we are trying to unify the code stream. It may have been the case that Leap and SLE sources were the same, but if you build them in different build environments they actually have different outputs: maybe different features are enabled, different RPMs are built, and so on. We really want to unify this so that it is the same, with the exception of maybe branding and some other features that can be very easily documented.
The next part is the concept of building the distribution, which is what Marcus was talking about: we really want to reuse the SLE-signed RPMs from SUSE and combine them with the output of openSUSE Backports; basically build once and ship twice, if I simplify it. You will have s390x newly introduced in the next version of Leap, and also real time, which we have from SLE, and so on. Then there are some tools and processes that we are introducing to the community, because if you base Leap on binaries that don't get rebuilt in OBS, that are just synced over and then used, you don't really have a good way to change these binaries. We really want to make sure that the community has tools to submit code change requests against these binaries, and to open features if you feel that Leap needs more changes or something like that; you should have a way for that, and Bugzilla for bugs. This is the case for SLE as well, so I really want to make sure that the community can file features in the tool which is actually used for feature tracking; that is a really good way to do it, and engineers work on them in the same tool, and so on. If you are interested, I have a public talk exactly about this: how the mirroring works, how the access to Jira looks, and so on. It is a public video on YouTube from last week, so feel free to watch it; I guess that if you search for something like "community requests SUSE openSUSE", you will find it.

So, openSUSE Jump, and where we are now. There was a question about the current status; I hope that this will give you some idea, and the go/no-go decision has some more details, which will be on the next slide. Jump is basically the implementation of Closing the Leap Gap: it is a concept of how to build the distribution and the processes around it, basically the three categories that you have seen before. So what do we have? We have distribution images which are available for wider testing since, I believe, late August 2020. These are based on SLE 15 SP2 updates; that is very important. So it is 15 SP2 plus some forks of features that were rejected: for example, SUSE did not want to introduce support for the Free Pascal compiler in GDB test results, and therefore we forked GDB to rebuild it; it is the same source, it is just built with or without the support. The same goes for a few other cases, like installation images and so on. So this is on top of SP2; we actually had to take some packages from SP3 to get migration working, for example, so there are a few pieces regarding YaST taken from SP3, but the baseline is SP2 updates.

Redirection and mirroring of submit requests is currently deployed, so you can use and test that; if you want to, reach out to me. You can basically create a submission against, let's say, bash in Jump 15.2, and if I approve the submission it will get mirrored to the internal SUSE build system and becomes a submission for SUSE Linux Enterprise. Depending on the origin of the bash package in SLE, I think this is the 15 Updates project, they would actually see the submission, and in this case the maintenance team, not the release manager, would be processing it. So this works now.
The only part which is missing is reporting the updates back from the bots and so on, but I was told that this should be done by the end of the month. Just to confirm: we cannot just flood SLE; you cannot create, let's say, 1000 submissions, because there is a person who is moderating them. That person is part of the SUSE-side review team, so basically it is a new review introduced for submissions against SUSE's projects. It is handled with a combination of a policy and an OSC plugin, with which you can actually create a clone of the submission, so it is not a fully automatic request plugin; it does not work just based on the approval. We have a pilot for Jira access for community members; thank you, Neil, for being part of the pilot. We meet regularly on Mondays, unless something is blocking it, and we go through the currently open features, make sure that some of them get updates, and review what is blocking, and so on. We currently have a small budget for the pilot, and I am not sure how it will be in the future; I hope that this will get some popularity and we can improve the budget, but I have no more information about that at the moment, unfortunately. Generally, though, this is the way SUSE would like to see it happening, so I believe the support from SUSE will be there.

Migration: this is currently probably the most problematic topic. We still hit some issues that are being worked out; it is not effortless yet. What you can expect once we are done is that you install, let's say, Jump, and then you can migrate it to SLE. What happens is that, aside from enabling the SLE repositories, you will just exchange the branding packages, and the rest of the packages in a default installation, unless you choose to add KDE, will be identical to the SLE packages, so they should not really be reinstalled, and the time actually spent on the migration should be way, way shorter. The setup is unfortunately still in progress; I believe we are still missing the support for patchinfo, which is being worked on; otherwise, for me, there are no other blockers. So patch-based zypper updates would not work with the current setup yet.

Here is the picture; I know it is not as nice and simple as on Marcus's slides, but basically the green parts are the parts which are introduced as processes or tools. Can you actually see my mouse when I am moving it? If you look at the upper middle part of the screen, you will see the small diamond, which is the redirection of the submission: if you submit to Jump, or maybe in the future to Leap 15.3, then based on where the package comes from, whether it is from Backports or from Jump because we had to fork the package, it will be redirected to where it is supposed to go. If it is SUSE SLE, there is an extra review: I, or Max, or somebody from the openSUSE release team, will approve the review, and then the OSC plugin will actually mirror it into the internal SUSE Build Service, where it will appear as a regular SLE submission with a big banner saying that this is a public submission and that the information is being synced out to OBS.
And then, once it is accepted, we actually inherit it: the future version of Leap basically inherits it as binaries right after the build has finished in the IBS; that triggers the sync, and therefore we will get the binaries as well. Or Jump, in this case, but I actually drew the picture before we knew the code name, I believe, so it says Leap on it. The Jira part is in the upper left: the green box is the Jira features, and there is the contributor who opens them, the little guy on the left. These get into the process, and our process handles them as regular partner requests, which is nice; we have not had that before. And the syncing of RPMs and sources from IBS to OBS is in the upper right part, the green, how would you call them, not boxes, the green cylinders, let's say; I think it is a database symbol. So that is the overview of the changes.

So let's talk about Leap 15.2.1; you may have heard of it, you may not. This is supposed to be an intermediate release, and the release date really depends; it was originally aimed at October, and I am actually now aiming at the first week of November. It was supposed to roll the Jump concept into production before the next regular release happens. With some parts we are ready; with some parts I believe we still need some extra time. I would really like to discuss it with all stakeholders and figure out, if we receive a no-go for certain parts, what the conditions are, whether we can actually fix them before the release, and so on. So it is still an open topic whether the release will happen or not; we may also agree on a small delay, or we may just agree to go directly to 15.3. Let's wait until Tuesday; it was supposed to be Monday originally, but some of the stakeholders unfortunately did not have a time slot in their calendar, so we decided to go for Tuesday morning. The link, if you would like to see what we are signing up for, is in the third paragraph; it has the list of stakeholders and was sent to the factory and project mailing lists so far. Somebody already asked to be added to the list, so he is there now. If you want to be there, feel free to let me know and I can add you; please don't just edit the wiki, send me an email, because there is also an invitation with the link for the virtual meeting that will take place on Tuesday. So just email me and I will handle it. It is intentionally placed after this conference, because we would like to absorb the feedback from the community; that is the aspect that sometimes gets skipped. We really want to make sure that we do not do any harm to the community, and that everybody knows what is happening and why, and sees the pros of the concept. If you want, we can stay a little bit longer in the chat room and talk about it; if something is unclear, you can raise the concern and we can see whether we can fix it, or whether it was maybe already addressed.

So, on to openSUSE Leap 15.3. Based on the openSUSE board's recommendation, the decision is to proceed with the Jump concept, that is, Closing the Leap Gap, and it is likely to be fully based on SUSE Linux Enterprise binaries. So why "fully based", and what about the previous one, Leap 15.2.1; what is the difference?
So, I mentioned the unification of code streams on a previous slide: we really want to make sure that if you build a package on Leap and on SLE, they basically have the exact same output; even if they were not binary-identical, they would basically have the same outcome. Right now, at this moment, we are not done with all the packages: there were roughly 130 or 120 packages that were different, where some support was there on one side and not on the other, for example the Free Pascal compiler support in GDB test results on the Leap side and not on the SLE side. We are working them out, and so far only two features out of those roughly 130 were rejected, so there is really a big effort from SUSE to make sure that they adopt the features, which is good. Some people were afraid that we would just start removing features, but that is not happening, and whenever we actually do drop a feature, like recently the SDL support in QEMU, there is a notification to Factory and we talk to Factory about whether it makes sense. In this case we decided to go in favor of GTK, so it was dropped not only from Leap but also from Factory, and I feel like this is the best-case scenario of how we can clean up a package. In 15.3 we are expected to be done with all of these 130; right now I believe about 40 are missing, mainly the virtualization stack, so virt-manager, libvirt and QEMU would be a little bit different in SLE and Jump, and that will change in 15.3. So far I have not seen any other feature being rejected except the QEMU SDL one; the rest I just expect to be the same, which is good and beyond my expectations.

Some people ask when the Leap 15.3 development starts. We are now focusing on Jump, and after the go/no-go decision that we will have next Tuesday we will either proceed with 15.2.1 and then 15.3, or we will start the development of 15.3 right away. This is basically why 15.3 is not yet set up in OBS: we were really focusing on the possibility of the intermediate release, which we will know about next Tuesday. And to be fair, some development is already happening, because, as it is based on SUSE SLE binaries, 15 SP3 is actually being worked on as we speak, so for those roughly 4,000 packages the development has not stopped at all. You could say the same about Factory's relation to Leap; it is not that we are frozen.

So what about the release after that? That is actually really tricky, because if you check the link that I shared, the public presentation by Kai Dupke about the SLE 15 roadmap, it currently basically says that you can expect five service packs at certain dates. It is no secret; we have very predictable releases which are 12 months apart, so you can sort of guess when the next release will be. And if you check, I believe it was slide 17 that showed when SUSE usually introduces future products, and this is just me talking to you, it is not an official statement or anything: it is not at the very last service pack, right? And if you can expect five service packs, then a new product will most likely come around the fourth one, or maybe with the fifth one.
And since I mentioned on the very first slide that Leap is based on the latest version of SUSE Linux Enterprise, it may happen that we will no longer be based on the SLE 15 code stream, and this is important to keep in mind. So I see a really cool option being that we will be based on SUSE SLE 15 Service Pack 4 if there is no next-generation product, or we will be based on the next product in case it is compatible with this type of distribution. If it were, let's say, and I am just talking crazy here, a rolling release: does it make sense to have two rolling releases in openSUSE? Maybe it does, maybe it doesn't. But in that case I would consider maybe still basing Leap on 15 as before, even when there is a newer product available. So nothing is set in stone; according to the roadmap, as I told you, we expect a 12-month release cycle, but the development should start next summer. So by next summer we should already know what the plan is, but as of today I still have these two options that I somehow have to deal with. And this is basically it, so now I would like to hear questions, or complaints, or concerns. Thank you.
I'd like to share current plans and known changes for upcoming release of openSUSE Leap. This is supposed to be a higher level talk without technical deep dive.
10.5446/54638 (DOI)
Yeah, welcome everybody to the discussion session with the openSUSE board. As everybody has noticed, we have this conference this year in a completely new format. First of all, we have it together with the LibreOffice community, and to my mind this is a big win, because not only are both communities growing together, but there is also much more exchange than if we were just among the openSUSE community or the LibreOffice community alone. Then, of course, everything is just virtual and online. We used to have a face-to-face board meeting for one or two days before the conference; that could of course not take place, so all the preparation was done via chats and Telegram groups and things like that. This technical format had a bumpy start yesterday, we had a couple of issues, but I must say the technical team has resolved them very well. So a big thanks from my side and from the board to the technical team, which to my understanding mostly consisted of The Document Foundation people; that was a really good job, thank you very much. My name is Axel Braun, and my colleagues from the board are with me. So what do we want to do before we have the question-and-answer session and the discussion with the board? I want to talk a little bit about our community: who we are, how many we are, how we are discussing, and a little wrap-up of what has happened since the last openSUSE conference; then later questions and answers. My board colleagues are here in the talk as well. If you have any questions, please interrupt; I cannot see the chat bar at the moment because I have the screen shared.

So how many are we in the openSUSE community? The clear answer is: we don't know. We have an indication, and this is the number of machines that are accessing our update servers. These are only the machines that are directly accessing the updates; of course we cannot count machines that are updated via local repositories or a local cache or something like that. And here we see that we have approximately between 250,000 and 300,000 machines. This is roughly an indication of the size of the openSUSE community; I guess it will be much higher, but as usual with free software, you cannot tell, because there is no obligation to phone home or to sign a contract or anything like that. It is interesting to take a look at the distribution of openSUSE releases; I have here two snapshots nearly two years apart, the first from the end of July 2018 and the second from the end of March 2020. First of all, surprisingly, half a year ago we still had machines from the 10.2 release accessing the updates. 10.2 was released in December 2006, which is 14 years ago; I think we did not even have smartphones or iPhones at that time, and Nokia was really a big player, so there have been a lot of changes since then. We also had about 16,000 machines on 13.2, which was released in November 2014, six years ago, and half a year ago there were still around 8,000 of them. One reason may be that this is the last release, aside from Tumbleweed, that supported the 32-bit architecture. And then, of course, the majority of the systems is on the release that is current at that point in time: two years ago it was 42.3, half a year ago it was 15.1.
And you are probably now asking: why are we not talking about current figures? Quite simple: in this analysis the 15.2 machines are not yet considered, so it looks like the number of users is dramatically going down, but that is of course not the case; it is just that 15.2 is currently not in the analysis. Quite surprising for me was Tumbleweed: we had about 50,000 two years ago and currently have around 83,000, so the number of Tumbleweed users is quite stable. I have no idea why the overall number went down; maybe later on in the discussion somebody has an indication for that.

If we take a look at the bugs we have per release, we can see that the older releases had around 2,200 bugs each, and they are mostly closed. There are two bars here: one is the total number of bugs, and the green bar is the ones with the explicit status "resolved". You can see that for the releases that are out of maintenance, the number of resolved bugs matches approximately the total number. For the currently maintained 15.0, 15.1 and 15.2 this is not the case. So for the releases that run out of maintenance, like 15.0, there will probably be a bug-closing session in the coming weeks, similar to what we have seen Fedora do, and that will clear the number of open bugs against 15.0. An interesting figure would have been how many bugs we have against Tumbleweed. Actually, I don't know, because I can only query via the Bugzilla web frontend, and that limits me to about 10,000 records; I can tell you we had more than 10,000 bugs in total, and more than 10,000 resolved as well. So if there is anybody who has database access and could do a SELECT COUNT(*), they could maybe provide us with the number of bugs. But in general we can see that the new Leap release brought us a bump in the number of bugs; since then it is going down, and 15.2 has only a thousand bugs for the moment. I bet a couple more will come, but in general it seems like the number of bugs is decreasing.

So where do we discuss, where are our users? First of all, the largest mailing list is not on this slide: the largest number of subscribers is on the SUSE Studio Express list, with about 771,000, but curiously enough there is not a single email on that list. I guess these are the users that had been subscribed to Studio while it existed; when it was shut down it was migrated to Studio Express and the subscribers were taken over there. The mailing lists with the largest number of active subscribers are security-announce and announce, with 2,900 and around 2,000. Then, from the general lists, the project list currently has around 694 subscribers, which is a plus of 9% compared to a year ago, when I took a look at these figures for the first time, and the general openSUSE list currently has about 1,200 subscribers. The factory list currently has about 1,357 subscribers, which is a plus of 15.6% over a year ago, and Kubic, which is one of our latest products, has around 90 at the moment. Of the language-specific mailing lists, the two largest are the Japanese and the German lists, and both have gained subscribers as well compared to the year before. I think this is a good indication that in general the openSUSE community is growing. Besides this, we have discussion forums and channels, like Discord.
We have the forums, we have Telegram groups, and as an example I have taken Discord, which has roughly a thousand users. Next question: how many openSUSE members do we have? We currently have 512 openSUSE members, which is a plus of 5.4% compared to the year before. What we also tried to determine is the number of contributors to Factory. Here Simon made an analysis: he analyzed all the .changes files for the identities that were checking those changes in, and we came to the result that approximately 3,300 identities have contributed to Factory (a rough sketch of this kind of analysis follows below). I am saying identities because we cannot map this one-to-one to individuals; it seems that some individuals have multiple email addresses. So let's take it as 3,300 identities contributing to openSUSE. This is way more than the members we have, so that means there is also some room for improvement: those contributors who are actively working on openSUSE could become members.

So what happened since the last openSUSE conference, last year, I think in May, in Nuremberg? First of all, we had a change in the chair: Richard stepped down and Gerald Pfeifer took over. We had two board elections, one in January and another one in August. The current members of the board you can look up in the openSUSE wiki, and we have them on the slide: currently we have Vinz, Marina and Simon on the board, the latest one coming in was Stasiek, plus Gerald as the chair and myself. Towards the end of the year we will have another election, because the two-year board seats are over for Marina, Simon and myself. Those who have not served a second term may go for re-election, and if they do not, everybody in the community can think about individuals who they feel should support the board.
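For context on how such a contributor count can be derived: openSUSE .changes files start each changelog entry with a header line ending in "Name <email>", so a rough identity count boils down to scanning those headers and collecting the unique addresses. The sketch below illustrates that kind of analysis; it is not Simon's actual script, and the directory layout passed on the command line is an assumption.

```ruby
# Rough sketch: count distinct contributor identities in .changes files.
# Assumes a local directory tree containing the package changelogs as
# *.changes files; this illustrates the approach, not the script actually used.
require "set"

identities = Set.new

# A .changes entry header looks like:
# "Mon Oct 12 12:34:56 UTC 2020 - Jane Doe <jane@example.com>"
HEADER = /^[A-Z][a-z]{2} [A-Z][a-z]{2}\s+\d{1,2} .+? - (?<who>.+)$/

Dir.glob(File.join(ARGV.fetch(0, "."), "**", "*.changes")).each do |path|
  File.foreach(path) do |line|
    next unless (m = HEADER.match(line))

    # Prefer the e-mail address when present, otherwise keep the whole name.
    email = m[:who][/<([^>]+)>/, 1]
    identities << (email || m[:who]).strip.downcase
  end
end

puts "#{identities.size} distinct identities found"
```

Counting raw identities like this naturally over-counts people who commit under several addresses, which is exactly why the number above is reported as identities rather than individuals.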
And we also aligned, we're closely together with the Free Software Foundation Europe. You may have heard the talk from Alexander Sander today about public money, public code, which is an idea which we definitely share. And of course we had again, just getting started issues with the Linux magazine. So they are bundling a paper, a magazine with a Leap DVD and explaining how to set it up and so on. This is quite handy as a giveaway for new users, especially to spread the idea and to spread the easiness of use for OpenSuser. What happened on the technical side? Yes, of course we had some 100 tumbleweed snapshots, ideally 7 in a week, which we mostly do not meet. But nevertheless, many, many tumbleweed snapshots. We have released Leap 15.2 a little bit later than planned because there were already the ideas about closing the Leap gap and jump. To sum this up again, up to now we had shared the code basis between SLEE, the Susal Linux Enterprise and the OpenSuser. And in the future, we want to share the binaries as well. So not only the code, but also the build will be equal between the SLEE packages and the OpenSuser packages, so that will save us in the end a complete code line with all the maintenance and so on that is required for that. So the full merge should be done with release 15.3 and an intermediate release, Leap 15.2.1 is planned for November, if I remember right. Probably the one or the other has noticed, oh, I needed a new account or I needed to log in again in a slightly different infrastructure. Yes, we had a significant change in our infrastructure. So mainly those areas where SUSE has carved out from Microsoft Focus and OpenSuser from SUSE with the impact that the forums, for example, have been migrated to Nuremberg. We have all the migration of the accounts, which was still with MicroFocus before to the SUSE infrastructure and the migration of the mailing list have been started. So we will get new mailing lists set up very soon. And this would have not been possible without the incredible work of the heroes, where I would like to really send a big thank you to the heroes team that did really an incredible work and to my understanding, I feel it went very smooth. Thank you. Yeah, we're just brought done additionally, we are working on getting more openness and getting more transparency into that. One of the major improvements here was that we have a new feature request process so that we now can submit feature requests for sleep products as well, which was up to that point in time only possible for SUSE employees. And now we will have this for non SUSE employees as well. The foundation initiative, which we already started discussing years ago and getting more in depth and face to face meetings last year was announced at the project list. We left out most of the interim steps, but came up with the idea to say, let's have a foundation set up for open SUSE because it has various advantages. And here it was quite interesting to hear an hour ago how Fedora is set up. They are in a similar situation as we have it now. Open SUSE is backed up by SUSE. Fedora is backed up by RedTed, it's respectively IBM, so this is an even larger sponsor. And as the presenter said, yeah, it has pros and cons, same as we see it here as well. So we have announced this in the project list and by intention, we hadn't been following up on that to see how this is being picked up by the community. 
And here we have to revive this a little bit because there was shortly a little bit discussion and then it faded out, but definitely it needs some distinct people to drive this discussion and to drive the development. We feel that this should not be driven exclusively by the board, so everybody who wants to help here should step up. Yeah, with this, I would like to hand over to the open SUSE share. Yeah, share. Gerhard. Thank you. We are preparing for this session, I was actually thinking and I realized in some ways I'm the new kid on the block. I've been engaged with Open SUSE from day zero, from what I remember even before, but the last year, mostly as a user filing back reports and not intricately with the insights of the project, like mailing lists and all those discussions. And so I figured I shared some of the fresher experiences. The first thing I wasn't surprised, but I was surprised how other people were surprised is actually the share role is subject to interpretation. I think many people, apparently at least within SUSE or sometimes outside, maybe less on the open SUSE side, tend to think the share is something like a CEO or CIO. And I can tell you, more escalation sometimes around IT stuff when we had to carve out our own CIO probably. It's not the CFO, it's not the CMO. I mean, the closest it sometimes feels is the chief escalation officer, which would actually fit into CEO. But really it's an interesting role and Richard truly can attest to that because there is all sorts of expectations and the only way you can lead is not by direct authority. That said, next slide please. It's an interesting role and it gave me definitely a lot more insights into open SUSE. And the first thing I realized is really there is a richness and variety and diversity in open SUSE that I would argue very few people actually are aware of, certainly outside of that community or communities. And there is a tremendous amount of passion. Now sometimes this passion leads to a lot of arguing. When you have passionate people on several sides of an argument that can become very heated, that's not always super constructive. There is a lot of openness I found, not necessarily always on the mailing lists, on the mailing lists when you have arguments and sometimes even inertia, but engaging with people one on one or in smaller groups in particular, that's where I, you know, whenever I had that I experienced a lot of openness and willingness to engage and willingness to share and help and also listen. And really open SUSE, one of the things that struck me in general is a deep sense of collaboration. And open SUSE is tricky, I've been doing open source, well I started with free software because open source didn't exist back then, but I've been doing free software open source for more than two decades. And open SUSE is definitely one of the more complex projects that I'm aware of. For many reasons, it's in the geographies, just the complexity of the different, of Linux distribution, but then all the tools that are part of open SUSE are that affiliated with open SUSE and the different communities and increasingly things above and beyond Linux, which makes this very interesting, very rich, but also means as open SUSE evolves, we need to step up. Next slide, please. And you know, stepping up in terms of the infrastructure and really as Axel mentioned the last year with all the carve out SUSE from Micro Focus on the IT side, open SUSE from SUSE to a very good extent, that definitely has happened. 
But it also means things like GDPR, as we move towards a foundation probably, things like finance, things like having even more elaborate election rules because as we realized this last year, there's corner points that we, corner cases that we actually had not fully covered, like what happens when, you know, what happens when the board is involved in a conflict, then the board can hardly arbitrary the conflict directly, etc. So there has been good progress now, would I have wished for things to be smoother? Absolutely would have wished for things to be faster and farther than, then actually materialized over the last 12 months, absolutely. But there has been progress. And one thing I believe, and I thought about this, what is the one thing I really want to suggest to all of us is related to islands. Next slide. As we have the, no, we have those different sub projects, the different groups and the speed language, geographies, interests, technologies, projects, etc. And in a way, those are like islands. And that can be a very nice thing. I mean, Indonesia is a country that exists of thousands of islands, still it is one country. And sometimes I'm missing is the connection and is communications between these islands. And I want to be very clear. I'm not proposing to use tons and tons of concrete and pour them between all those islands so that it becomes one big island and, you know, one completely consistent and 100% connected and same, same island. And that's not what I think will be healthy for the project. But sometimes I'm wondering, you know, can we use megaphones or messages in a bottle or bridges or little planes or boats to communicate more between some of those? I believe we could benefit as a project or as projects plural and as a community or as communities plural. If you were to share more, share more of our accomplishments, share more of our needs, share more of what we're planning to do, etc. I don't have this recipe where I'm saying, okay, here is the seven steps, here is the schedule. But I'm putting out the idea or the really the request is too strong, I guess, the suggestion for us to find ways to become even better, stronger in communicating. When you say communicating, that's internally, as I mentioned, but also externally. I think we have some very good activities. This conference is one, we are on social media, many of those media actually. And there's local events, there's people attending conferences on behalf of OpenSUSE. There is members and others helping newbies. So there's a lot going on, but I still think we could even do more in sharing in particular of our accomplishments and partly communicating, working together with other projects. So ramping up communications, parenthesis even farther further is one thing. And the other thing that I noticed is many of us have an engineering background. And much of what we do, this being an OpenSUSE project, we solve around code one way or the other and reviews of code. And when you review code, when you review a patch, you try to find mistakes. Because even an off by one error can be a security issue. Or a corner case can lead to the thing crashing. But that's about code. And when it comes to human interaction, there's something I felt and it actually has a new name as I learned recently, is the principle of charity. And what the principle of charity says is actually, assume the best interpretation of people's arguments. So when you have an argument on the mailing list or in person, don't try to find the weak spots. 
And don't ignore them if they are there, if they are problematic. But don't focus necessarily on the weak spots. Try to find what the other person actually is trying to do, is trying to relate to you even if in the argumentation or what he or she says, there is an off by one error or is something I don't agree with. And so that's, I was planning to send a note to the project list. I'll dig out some references and still do that. But that's really one thing that I feel ramping up communications even more, but also slightly adjusting how we communicate that really can bring us into the next 15 years and help our journey there. Yeah, and that was my brief excursion with that. Axel, now you can go to the next slide, please. I'd like to open the floor. I believe I've seen all board members and a fair number of other people and no by name here on the chat and in the session. So please shoot the head and ask any questions or make any comments either live or in the chat. Yeah, thank you, Gerrit. Yeah, discussion is open. I think you can stop sharing this. So I'm going to be the adventurous one here and ask the first question. So Gerald, now that you are the open Suza chairman, and you know, this is sort of your thing to be for better or worse, kind of the face of open Suza. My concern has been for a while now, you know, just not from you, but from your predecessors and stuff, the lack of visibility of open Suza to the greater community. Like people don't, like, what do you plan to do or what do you want to do to try to bring some more visibility to open Suza so that people will know that we exist as a community and that, you know, users will come and users will turn into contributors and we can sustain ourselves going forward. Because that's been my chief worry for a little while now with how things have been going. So to answer is I don't, I don't see the chair person. I call myself chair person or chair human, not chairman usually. I don't see the chair necessarily as the face of open Suza. I mean, he or she certainly is one face and maybe one of the more accessible or prominent faces. But all members and in fact others should not be shy of representing open Suza. To be very clear, that's not me pushing of a responsibility. That's just to invite everyone and not make this an exclusive gig. I can and I plan actually on doing, I can do and plan on doing more. I mean, I started the little amount of tweeting. Actually I increased that. It's about tweeting. It's about interviews. I mean, one thing that my role at Suza gives me access to is the press. And that's where I tend to bring in and up open Suza quite a bit in interviews when we know whether that's a topic that was originally requested or not. But I try to weave in open Suza. What would be helpful for me and that's actually the point about communications. If you and it's, you know, everyone who contributes something to open Suza, if you help and feed me with by sharing cool things that you do, then I can share those things in those interviews or other, you know, internally when we, when we will get probably US government approval pending, we will get those new colleagues from your venture. That will allow me to share more, right? So part of part of my request for communications is let's make sure more of us know more of the good things and the cool things that we do so that we can share them more. Yeah, I said, Neil, we've started a networking part in the open from Europe, regular courts and mailing lists. 
There are many, many users from other open source projects, but as well from industry, including IBM, Red Hat and something like that. So that of course raises the visibility and we've also worked on our connections to media. So I had a longer discussion with an editor from the German CTE magazine explaining him the advantages of open Suza and he finally wrote at least an article about Leap 15.2 and that was surprising in that way because up to now the CTE magazine knows only Ubuntu basically. So they're very resistant to what open Suza. We are working here on that to get a little bit more visibility. That's good to know. Thank you, both of you. As Gerard said, basically every community member can feed us with information but as well start tweeting and using social media and is private environment or whatever to spread the word about open Suza. I think that another point is to interact more with other projects, other community like we are doing for example for this conference with the debris office one because it's really normal to have several community members that are in all these communities and the link is already there. We should just use it and push it a bit more and maybe we could get better interactions and also better contributions. I mean I'm just thinking for example to the tools that we are sharing with Fedora or also with the debris office and we can just grow all together just learning each other what the others are doing. That's awesome. I read a question on expanding representation on the board where it is fully representative of the global community. I don't understand that question. Let me take that question. I think the question is around the fact that most of our board members are in Europe and I would strongly encourage in the next upcoming election people from all our different communities to consider nominating and running from the board or if you are part of a community in a region and you think someone else would be really good on the board I would encourage you to speak to them and try to get them to nominate. I think it's something we can improve because we have a very, I know personally I have spent a lot of time with our community in Asia and I would really like to get more of them actually run for the board so we could have a bigger, more diverse community running. Hey guys, I was about to say the same thing but I think Simon just highlighted those. Hey everybody, I'm Ish. I'm from the elections committee. I would like to add some information to answer the question. We the guys from the elections committee, we do have a lot of trouble to get candidates for the board. So here again I make a request to you for the next election if you want to see the global community being represented on the board, the first thing that you can do is present yourself as a candidate or if you cannot at least nominate somebody from the local community so that we can at least see in the board that we have people from around the world, from the different continents, from different places, different cultures and yeah that would be my part on this. Cheers. Yeah, actually the question came up on time zones. Time zones should not be a factor. I mean obviously it will be a factor but should not be a blocker. We have this strange Aussie here which means we have board meetings late evenings, German time or European time or early mornings, European time. I mean Hawaii probably. 
So Hawaii or French Polynesia might be really tricky but apart from that North America or other parts of Asia wouldn't and shouldn't cause challenges. Yeah but this is something that we have to deal with. Yeah, absolutely. And we rotate and I mean one question that I thought when I heard the question is should we have dedicated board seats per Chihuahua or so but that's also tricky to pull off. So I think the first thing I really think we should do is encourage and support behind the scenes maybe. People from other regions and GEOs. Right now we have two Germans on the board if I'm counting correctly. Two Germans but five Europeans. So definitely not very distributed. I'm sorry I mean I asked the question but I don't really want to cut my wife. But what I'm looking at is really you're all answering it like you know it's obvious we all just want to be represented. We want the community to grow and it doesn't matter where it is on the globe but sometimes you feel like how can we move that forward? How can we? It's one of those things that from my side I see it and I go we have this community here and this community here and how do we bring that together and that's really the hard answer. I've got a bit of an opinion with that if you don't mind me throwing in my two cents to the whole thing. Like I think part of the problem is like we talked about earlier like misunderstanding of what the role of the board is. If you see the board as representing the face of the community or if you see the board as leading the project then I think yeah there's a real problem with the board not being as diverse as the community is. But if you see the board as it's chartered to be as a conflict broker and troubleshooter chaos managers then the issue of representation is something that all of our communities can already do. All of our communities in the Americas, in Asia they can step up and represent the community now as if they were in the board anyway. They don't need to be on the board to do that. So I both kind of get at the problem and not like I know people see the board in that way therefore there wants to be the representation there but I think that's actually part of the problem. The board isn't made to do that. If it is then like look at the talk that we had from Ben today like yeah if that is kind of the role of the board then we need to have something that's way bigger, way more like the Fedora arrangement where you do have designated people there for designated roles and you have a couple of elected roles in there. That's a totally different scope, charter, election process, appointment process, blah, blah, blah. Nothing is either better or worse just from what we have right now. That's my view anyway. I think some of the misalignment here at least from someone outside looking in until Richard helpedfully explained it to me over beers one night. The misalignment here is that when people perceive how the board is chartered, as Richard said, it's chartered to essentially dispute resolution, final arbiters for people and such, they feel that in order for that to be successful there has to be cultural representation across the communities that actually opens Suza is in so that this kind of stuff works out a little bit more effectively because of course when you're talking about people and let's say someone from India has a problem with someone from Germany and there's nobody who understands the Indian guy's point of view, then it might go badly. 
Even if we're talking specifically about the role the board is chartered to do — to be the arbiter of the trademarks, the arbiter of conflict resolution, things like that — I think people feel it should be in some ways culturally representative, so that that function can actually work as intended. Not that it isn't working well as it is now, but I think some people feel it can't, because of the lack of representation across communities. Again, not saying whether that is right or wrong; that's, I think, where the underlying perception is. By the way, I just want to throw this in: I don't know if any of you watched Cobra Kai, you know, the Karate Kid stuff, but it's awesome to actually see the other side of the story there. My generation, growing up, never got that. We always took Ralph Macchio's point of view, and he was the good guy, but different points of view really make a difference in conflict resolution. Yeah. True. Maybe this time around we should actually start earlier for the election and invite people to think about standing. I'm sure several of the people here would be willing to have one-on-one conversations if anyone has questions on the board, or on openSUSE, or on what it is like to be on the board, and we could put that out to encourage people. We should actually be starting that process in the next couple of weeks. Yeah. Oh, geez, that's right — if we want to run the election on time, for the first time in as long as I can remember. Yeah, I don't think I've ever seen it run on time. I think ish's hair is just starting to stand up. I don't know if ish's hair is under ish's control. ish does a fantastic job running the elections. He looks a little shocked. He's worked really hard this year. We could just fix the process so that it doesn't say the new board should start on the 1st of January. Oh, well, now we're talking about redoing the charter. That's a bigger problem. Yeah, change the charter, then you have an election about changing the charter, and then you can have your election. So it'll be the first of January 2027. I have been in a different community organization where we had to change the charter to make the election happen, because it had two conflicting statements about when the election should happen. So we had to have a special meeting to change the charter. Oh my God, that's terrible. I would say it's the year to do it. Well, if everything else is going to happen, we might as well do this, right? Thank you, Richard. We're now apparently going to change everything. Yay. Can we have the election date for one of those elections be the same as that one in America in like three weeks' time? No, no, no. Just to really confuse people. No, no, no. Oh, no. You can't. No. Not happening. Yeah. Don't wear ish out over this. I like ish. I think Richard is only saying that because ish cannot simply jump on a plane and hit him for the next couple of months, probably. Yeah, but at some point I'm going to fly back to Mauritius, and then I'll get off the plane and it will just be... I think you deserve it, Richard. I do. I totally do. Any other thoughts, questions? Seems like everybody's reminiscing about their start with openSUSE in the chat. Yeah, all those youngsters. I have so many more gray hairs than when I started with openSUSE.
Hey, I joined openSUSE right when the turmoil started, with Attachmate and Novell deciding, you know, we're going to do stuff. Those were the fun years. Were they? I feel like those were the opposite of fun years. I had a whole fork ready to go. Like, those were fun years. Lubos actually has a real SuSE 7 or so box on his desk. Wait a minute, you have a thing that's not supposed to exist, Lubos? No, it's SuSE 6, because I started with SuSE 6. Well, I didn't buy it back then — it reached me later, but I got it. Let me find it. I have it here, but it's kind of... It's the hammer one. Oh yeah, I had that one. Simon, I have that one, but it's in Italy, unfortunately. I'm not sure if SuSE 7 still had gold masters, you know, like physical copies, but I think it didn't, so this could be the last one. Is there still a plastic wrap around it? Yeah, that's a plastic wrap. Of course you can't open it. You can't. I've got one back in the office in Nuremberg — I've got a legendary SuSE 6 box still in the plastic. I'm pretty sure it's propping my monitor up. What? I have a thing. I got some stuff from Swedish Novell. No, but... Yeah, the thing is, I had an openSUSE 10-point-something one and it was just a centimeter or so too low, and the SuSE 6 one was just a bit thicker, so it worked out really nicely. I always liked... not this 10.2, but... not in our box. Let's see. Please. Please. bittin is showing us — I think that's 10.2. Yeah, 10.2. I got that from a guy from Swedish Novell when I had, like, 10.1. Yeah, as we learned, it was released in 2006. If I still had access to my office, I'd show you my Novell SUSE BrainShare hat that a colleague of mine and I got, you know, in 2010 or '11 or so. I was helping him with the university systems, and he went to BrainShare, came back and gave me a hat. So I have a Novell-branded SUSE BrainShare hat. It says copyright 2006 Novell Inc. Yeah, you know what, I was the project lead for the underlying SLES version. Back then we used SUSE Linux and openSUSE kind of as the betas: when SLES started the release candidate phase, we would do a SUSE Linux release. Talking about Novell BrainShare, I have my Novell speaker shirt from, like, the last BrainShare, and I loved the irony that it said it would return — and then it never did. Wow. Yeah. That was back when I had long hair and I was hanging out with two other people with long hair, and a random woman just walked up to us and goes, are you the Linux guys? And we looked at each other like, yeah, I guess we do stand out compared to everybody else here. In your picture from those times you are also wearing the gaudiest polarized sunglasses I've ever seen in my entire life. What, the green ones? No, the green ones with the yellow tint on the eyes. I don't remember them. I guess I don't remember them. Maybe for a reason. Your openSUSE Connect profile still has that picture. No. Yes. No, it can't. You've got to clear the cache. Seriously? Take a screenshot before Richard can actually change that. Don't go around. Take a screenshot. No, no. My openSUSE one has my usual headshot. What did I... Oh my God, those, yeah. They're not good. I actually got them... They're cool. What is something in them? Don't diss my nice sunglasses. Did anybody see, how many years before it actually happened, SUSE already knew about COVID-19? Indeed, indeed. Oh, geez. No. It's not even my starter book.
By the way, for those who didn't see it, this is the Richard I'm talking about. You, with the biker glasses. Don't tell a biker that. You look more like you're acting in Sons of Anarchy or something like that. Come on. Come on. Who calls that long hair? This is not long hair. I think the long hair might have gone, but I've still got the sunglasses, all right? It's not long hair, Richard. It's not long hair. I mean, Richard, does the dude at opensuse.org still work for you? The dude at opensuse.org still works for me. Yep. Indeed it does. Oh, we see. No, no, no. There's Richard with the glasses. Put the glasses on, man. And grow your hair instantly. No. The two don't match. I'll just keep the glasses. No, these are my cycling glasses for when it's sunny, so I'm not going to wear them again for a year. You look like you're about to go in for eye surgery. Who? Me? No, Richard. For eye surgery? I mean, because, like, that — or you've just come out of eye surgery. That's probably a better way to describe it. I'm not listening to Neal Gompa again. Yes, you're listening to me, nerd. Good night, Ignace. Sleep well. Okay. Thanks for putting up with this random segue. Just for the record, are there any other questions? Or is the beer session sliding in, or whatever? I think we've gone officially into the beer session.
This is the annual discussion between the community and the openSUSE board. In this session the board will also share an update on ongoing projects.
10.5446/54641 (DOI)
Just to make sure that you guys are seeing the screen in front. Yes, we see it. Excellent. So, I guess we are right on time and we should start. Thank you very much for attending, and thank you openSUSE and LibreOffice for having me. I'm going to be talking about GNU Health and, more specifically, later in the talk, about the mobile and desktop application for the personal health record that will work on Libre phones and KDE desktops. Going through the agenda: we'll talk a bit about the GNU Health history, then we'll get into the ecosystem components, we'll dive into the specifics of MyGNUHealth and why it was created, a bit of the technical infrastructure, and finally, if we have time, we'll go for some questions and answers. Just a bit about me: I'm a computer scientist and a physician by training, and my specialty is in genomics and medical genetics. On the activism side, I love social medicine, animal rights and of course Libre software. You'll see that I don't really talk much about open source, but in the end we all know what we're talking about, and it's fine with me — people like to say open source or free software; I like to call it Libre software, which probably gets closest to freedom and the philosophy of our project. You can reach me at falcon at gnuhealth.org anytime you want; just drop me an email. Well, the GNU Health project starts... Oh, Lubos is saying: "maybe it's an issue on my side, but I still see slides from Richard." No, no, no, I can see Luis. Try to click directly on Luis's icon; you are still locking the focus on Richard. Thank you. Okay, cool. So, a very brief history of GNU Health and the community. The picture you're seeing there was the very first project, in 2006. That's Santiago del Estero in Argentina, and we were going through rural schools and putting GNU/Linux desktops in their classrooms. And then I noticed that these kids needed a bit more than just technology, and that's where all these rural medicine and social medicine concepts came to my mind. That's where we actually started developing GNU Health, and in doing so, we created the GNU Solidario NGO. That's the NGO behind GNU Health; it's a not-for-profit organization that works globally, and we are focused on social medicine. Social medicine is a very vast topic — from primary care to epigenetics — so it's quite complex stuff. In 2011 Richard Stallman declared GNU Health an official GNU project, and since then it has been hosted at GNU Savannah by the Free Software Foundation, and it has many mirrors around the world. We are actually thinking of creating other Mercurial repositories to host the different ecosystem components that we will see in the presentation, not just having everything in one. And we pride ourselves on having a very nice, friendly, international community — people from Germany, Austria, Scotland, Spain, Argentina, the United States. I mean, we have a pretty nice, large community. These are just some examples of people using GNU Health around the world, from the Red Cross to hospitals in Argentina or Cameroon or India; from small primary care clinics to very large hospitals, like AIIMS in India; or nationwide implementations, like in Laos or Jamaica. So depending on what you do — whether you are a research institution, a laboratory, or a primary care center — you will choose the packages that are needed for your institution, and GNU Health usually has the solution to fill your needs.
Now, again, I try to fill the gap between health informatics and social medicine, because usually what you see around is an over-sophisticated system of health that doesn't actually meet the needs of the population. I think that by bringing in social medicine, that gap is filled — over 80% of the issues that create disease come from the social determinants of health. So we needed programs, and we needed computing power, to actually take care of those issues. As I said before, it's an official GNU project; every single component is Libre software. We have the package for openSUSE, and we of course have the source code on PyPI. Most of the code is Python, so you will see the different packages on PyPI. We also have some Vue.js code for the portals, we use Flask, and the documentation is currently on Wikibooks. We will keep Wikibooks, but we will also have our own documentation portal that will keep track of every single version that we have. These are some of the operating systems, databases and development environments that we use in the GNU Health ecosystem. And these are the main components: you have a hospital management information system; you have a way of running GNU Health embedded, for domiciliary units and small centers, and also for laboratories and so on — many people use the GNU Health embedded solution; you have the LIMS to interact with all the hardware in the labs; and you have the bioinformatics package that deals with genomics and all these natural variants and mutations, cancer research, rare diseases and all these things related to genetics. One very interesting component is what puts all of these together, the GNU Health Federation, especially for large nationwide implementations. This is just one of the components here; you'll see some screenshots from the GNU Health hospital management information system. From here you can do anything from diagnostic imaging to pediatrics to gynecology to histopathology. You have cameras, you have agendas, and you can pretty much run a hospital, okay? You have stock management, bed management, operating rooms, pharmacies, labs and so on. So it's a pretty large program, but it's quite stable now; it's been running for over 10 years — 11 years now. This is what institutions, what hospitals, use the most: the hospital management information system. But what brings us here today is the upcoming personal health record. Up until now we have had GNU Health focused on the system of health. That means you deal with health professionals, health institutions, governments, public health and so on, which has been very valuable for them to take care of their patients and their people. But we needed — or I felt the need — to empower the person, the citizen, and to make the citizen part of the system of health. And to do that you need an application: a desktop application and a mobile application, right? But I was kind of stuck, because I didn't find any device that was free, meaning I didn't feel comfortable with either Android or iOS in terms of privacy. So I kind of waited for the arrival of a phone that actually ran GNU/Linux, where the code itself was open, right? And that's where the PinePhone showed up, and at the same time I met Aleix Pol from KDE, and he told me, hey Luis, why don't we do something on this platform, with Kirigami and so on.
And I said, yeah, well, it sounds wonderful. And I started learning Kirigami and Qt and QML and all this really neat stuff, and that's what we're doing today. So the idea is that you need — we need — an application that has to be easy for the end user, right? Everybody should be able to install it and run it. But at the same time, it has to be good on privacy, right? You know that that's one of the key ideas of Libre software: to respect your privacy — or at least we intend to respect your privacy in the code that we write, from the operating system to the application level, all the way around. This is part of the technology we are using in MyGNUHealth: again, it's a Python application. We use Qt for Python, which used to be called PySide2; Cristián, one of the leaders of the Qt for Python project, is also helping us out on the application, so we are excited about it. We use Matplotlib for charting, in the same way that we use Matplotlib in the hospital information management system. That's the good thing about Python libraries: they are reusable. We got used to them, it's a beautiful library, and now we got to port it to MyGNUHealth. Kirigami does a beautiful job, because I'm able to run the very same code on the KDE desktop and on the PinePhone. TinyDB is a JSON-based, document-oriented database that fits quite well for what we want. Remember that this is a one-user DB, right? This is a DB that belongs to you, to your laptop, to your desktop and to your phone. It's not a multi-user DB; for those, we have another one that we'll see later. And GnuPG, basically, for encrypting and signing documents, right? So this is what it looks like; these are current screenshots from the KDE desktop. Basically, here we are in the bio section, and then you have the psycho and the social ones. You can upload documents, and of course, if you are in dire straits, if you have an issue, you can call emergency. Remember that we are working on social medicine, right? So there are a lot of things that are important there: nutrition is important, family affection is important. You have a lot of stuff that goes beyond the vital signs, okay? Vital signs are very important, and in fact we are putting some of them here in these charts, but they are just one part of the picture. And I just love it. Kirigami makes it really, really nice to work with, and the user experience is quite good. As I said, those plots you see there are coming from Matplotlib. And this is an actual picture that I took of the PinePhone, showing pretty much the same, but this time on the mobile device, right? We are using KDE Neon, but it should be portable to any operating system that works on the PINE64 hardware. And now, I would say that one of the good things about the GNU Health ecosystem is that we use this concept of federation, meaning any of us — whether you are a laboratory, a person, an academic institution or a hospital — you are a node. If you want to be part of the federation, you will become a node. And MyGNUHealth itself is a node, meaning that you can say, hey, I want to share specific data, for example my vitals, or my blood glucose levels. And you can send that anonymized, or you can send it to your health professional. And that's what makes you part of the system of health, okay? At that moment, you are part of the system of health, which is great.
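To make the local-storage side of that stack concrete, here is a minimal sketch of how a personal health record could keep readings in TinyDB and chart them with Matplotlib, the two Python libraries mentioned above. The file name, table name and field names are illustrative assumptions, not MyGNUHealth's actual schema.

```python
from datetime import datetime, timezone

import matplotlib.pyplot as plt
from tinydb import TinyDB, Query

# One JSON file per person: a single-user, document-oriented store.
db = TinyDB("phr_demo.json")
bp = db.table("bloodpressure")

# Record one blood-pressure reading (field names are made up for the demo).
bp.insert({
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "systolic": 118,
    "diastolic": 76,
})

# Query the local history, e.g. elevated systolic readings.
Reading = Query()
elevated = bp.search(Reading.systolic >= 130)
print(f"{len(elevated)} elevated readings out of {len(bp)}")

# Chart the trend with Matplotlib, like the plots shown in the talk.
readings = bp.all()
plt.plot([r["systolic"] for r in readings], marker="o", label="systolic")
plt.plot([r["diastolic"] for r in readings], marker="o", label="diastolic")
plt.ylabel("mmHg")
plt.legend()
plt.savefig("bp_trend.png")
```

The same code runs unchanged on a desktop and on a PinePhone, which is the point of keeping the record in a plain, file-based document store.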
I mean, especially now, in the context of the pandemic, you don't have to move from your place to send your information to your nurse or to your primary care physician. Or, if you are in trouble, you can click that emergency button and we'll send that information. Remember, it's much more than just vital or clinical data that you are supposed to put here. So basically, that information will go to the GNU Health information system, and now we are talking about a very, very large database, which is also PostgreSQL. As we said, on the phones and on the desktops, if you are running your personal health record, you're going to have TinyDB — that's your database, where you store your data and your documents and so on. But if you want to be part of the GNU Health Federation, that information will go through Thalamus, which is kind of the message server — the authentication and message server — and from there it will go to a very large PostgreSQL document-oriented database. We are using JSON fields, which allows the Ministry of Health, for example, to do very good analytics on it. So hospitals will use transactional, relational databases; the system of health, or the Ministry of Health, will use a more document-oriented DB. This is pretty much what I was saying before: every person will become a node. You can send your information to your health professional. The person is in control of what to share — actually, you don't have to share anything if you don't want to. And it will definitely decrease the load on the public health system; at least that's what I try to achieve. And then, after you have all of that in place, this is the good stuff: whether you are a research institution or the Ministry of Health, you can start doing really good things in the genomics and medical genetics area. There are so many natural variants with unknown clinical significance — you know that there is a mutation, but you don't really know how much that mutation will impact your health. By having these very, very large databases, you will be able to see the correlation between genotype and phenotype, and hopefully prevent, detect and treat better whatever condition it is, in the areas of genetics and of course epigenetics. So this is one of the really awesome things we should be able to do with the Federation, and putting the citizen in there will multiply the good things about it. And of course, you will have the possibility of a real-time observatory and reporting. For example, in the case of Argentina, GNU Health has been chosen as the observatory for COVID in Entre Ríos. So you have different health centers, and the moment you have a notifiable disease — whether it's tuberculosis, Ebola or COVID-19 — that information will go immediately to the GNU Health Federation, and the Ministry of Health will know about the case in real time. And that's key, okay? You don't have to wait a week to know the incidence or the prevalence of a disease; you will have that information in real time. So instead of it becoming an epidemic, you can cut it short at any moment of the outbreak, if you have the GNU Health Federation in place in your province or in your country. So what are we doing today? Well, pretty much, we are now linking MyGNUHealth to the Federation.
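As a rough illustration of the node-to-Federation flow just described — a personal health record pushing a reading through Thalamus into the Federation's document store — a client call could look like the sketch below. The URL, resource path, payload fields and authentication scheme are all assumptions made for the example; the real Thalamus API and schema are defined by the GNU Health Federation deployment you connect to.

```python
import requests

# Hypothetical Thalamus endpoint for one person's blood-pressure readings.
THALAMUS_URL = "https://federation.example.org/people/ARG-123456/bloodpressure"

reading = {
    "timestamp": "2020-10-17T09:00:00+00:00",
    "systolic": 118,
    "diastolic": 76,
}

# The node authenticates against the message server and posts JSON;
# Thalamus would then persist it in the PostgreSQL/JSON backend.
response = requests.post(
    THALAMUS_URL,
    json=reading,
    auth=("federation-account", "secret"),
    timeout=10,
)
response.raise_for_status()
print("accepted:", response.status_code)
```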
That code is being put in place; you can check it in the Mercurial repository at gnu.org, and we should have a beta by December. Any questions you might have, you can send either to me or to info at GNU Health, or just join the development mailing list — that's probably the best way if you want to help us develop, whether it's MyGNUHealth or any of the other components of the GNU Health ecosystem. On the to-do list — I was listening to Richard before — well, packaging is always one of the issues, right? What is the best way to package GNU Health and MyGNUHealth? Should we just create a Python package and upload it to PyPI? Or should we have operating-system-specific packages, whether for FreeBSD or openSUSE or whatever? I know Axel is already working on something like that, and I see his comments around there. That's fine, that's perfectly fine. As in the case of the hospital management information system, my goal is to create a generic, vanilla distribution — something that is valid for any Libre operating system. So whether you use FreeBSD or OpenBSD or openSUSE or Arch Linux or whatever, you should be able to install it in your place, with, of course, the right documentation. And then if any of you guys — as in the case of Axel — want to do something on openSUSE and have the RPM ready, that's great for me. As I said, in the case of MyGNUHealth it has to be something easy, because people have to be able to install it. As a matter of fact, for the PinePhone it should just come pre-installed, and then if you want to delete it, you can delete it — but it's a way of making things easier. Connectivity with open hardware devices, documentation — that is always one of the things — and security: those are the items on the to-do list. And we have two minutes. This is one of the things we have in Munich: an openSUSE Leap community server, so people can just log in there and play around with the Federation or the hospital management information system. And once we have MyGNUHealth in place in a couple of months, you will be able to send test data there and play with it. So that's pretty much it. Any questions? I'll try to answer. So, thank you again for being here, and I really did enjoy the presentation. So, Luis, one thing: does GNU Health have a survey for new patients? I was just using a web-based one. That's the question, because this is the first time I used this kind of interaction with my doctor, and it was basically a survey for the registration of new patients — like a regular one. And it was quite cool, but he told me he has had it for two-plus years and I was the first person to ever use it. So that was quite sad, but, you know — is something like that part of your suite? So you mean a survey for... Exactly. Before you ever start with a doctor, he wants to get some info about you before he registers you, and you fill it in, and it can already contain information like vaccinations. So I took my vaccination document and copied everything in there, and diseases — I don't have any, so it was easy to fill in — and maybe, you know, whether you have a license or whatever, whether you have glasses, stuff like that.
And he found it useful, but again, nobody was really using it. Still, if people did, it would save him a lot of time, because otherwise he has to enter that information into the system himself, right? Right. So that would go a bit toward the concept of the book of life, or the pages of life. Once you have your Federation ID, you should be able to upload part of your clinical history to your GP. Okay, that answers it. Thank you. Thank you. Any other questions? Yeah, I saw there — vaccination certificates, exactly. That would be part of the citizen and patient portal that you could also use. Great. So, Sophie — yeah, I guess we are running out of time. Does GNU Health integrate with digital ID? Yes, you have different types of authentication. You can use certificates or password-based authentication, and it really, really depends on the legislation of every country — but that's one of the beauties of Libre software, right? Yeah, you can even be anonymous. In Argentina I have treated people who didn't necessarily need to give their information, and that's good — that's respecting people's privacy. But I'm a bit wary of other types of authentication that are run by private companies. So it should be a Libre way of authenticating, as we do it today. But I guess we are running out of time. We can go to the chat room, if you guys want, and leave it to the others. Right, Doug? I think we have a presentation now. No, no, no, you can go on — there is a break now. Oh, good. Oh, okay. Let me just check. But if I'm not mistaken, there is a break now. Yes, that's right, there is a break now, so you can go on discussing with one another. Oh, cool. So we have some minutes to talk and discuss. Yeah, I mean, the thing is that healthcare deals a lot with legislation, you know, current legislation. Some people say, well, I don't want to pass on any of my information, or I want only this set of GPs to know about my clinical history. How can I do that? In the hospital management information system it's very flexible. We use Tryton as part of the framework, and at that level you can pretty much set authentication and these sorts of ACLs at the level of the row and at the level of the field. And also, depending on what type of values you have in that field, people should be able to access it or not. That's doable. The thing is, usually each country has its own set of rules when it comes to health data, and whoever implements GNU Health in that country has to abide by those rules. And the scenario I like to see is — you know, I don't like, for example, to host the Federation on any private cloud servers, whether it's Amazon or Google or whatever, because of the type of data: these guys live on data, and we'd rather have our own cloud, if you will. But it should be private, okay? It is something that belongs to your health, your physicians and the system of health where you are. No private company should even be hosting your information. But that's a personal choice at the end of the day — GNU Health is free, so you can put it wherever you want. But it's important to keep that in mind: at the end of the day, it's health, right? And the weakest link will probably be where you are actually hosting your data.
Other than that, you can encrypt at different levels. You can use different encryption mechanisms — it's really in the libraries. I'm using GnuPG, and I'm using bcrypt for some things, for hashing. But if you want to use another library, it's doable. The goal, in the end, is to have an ecosystem where, from the very smallest component, which is the citizen, going up to the family, going up to society, we are able to act preventively — so that we are not doing reactive medicine, as we are doing today in most countries. And the COVID pandemic has probably been one of the best examples showing us that we are not doing good public healthcare at all. Good public healthcare deals with prevention; it deals with keeping people in a healthy state, not just curing them. People go to the doctor when they feel sick, and that's the wrong approach: people should go to the doctor to stay healthy. Because if you are sick, something is wrong, right? We probably should have done something better to prevent you from getting sicker. And again, the way you eat, the way you sleep, the level of affection you have at home, the vaccinations that those kids should have had and didn't have — that is probably what is making them sick. There are so many things — the education level, the nutrition level — and all of this is in GNU Health. And then yes, of course, you have state-of-the-art genomics and clinical genetics and so on. But GNU Health is about social medicine, it's about primary care. It's about keeping a society healthy across all these socioeconomic determinants of health that are ignored most of the time in many countries. People think, oh well, we have the latest MRIs and the latest tomographs — yeah, but most of the time those only detect a cancer that might have been preventable if we had done good preventive medicine, if we had good public health programs for health promotion and disease prevention. That's where I want to focus with GNU Health, and with the application we have, MyGNUHealth, which I think is going to put the person in the driver's seat. He or she is going to be part of the system of health, meaning now you are also responsible: you are not only taking orders from your general practitioner, you are also responsible for your own health. And I'm quite excited about this new application. You know that there are thousands of e-health and mobile health applications out there, but very few take this approach and actually connect to the public health system in your country. I think it's going to make a very important difference, and let's see how it's taken up by politicians — that's one of the things we have to work on. I see that Alexander is here; these guys at the Free Software Foundation Europe are doing a great campaign, Public Money, Public Code. That's how we should talk to our politicians: talk to them and say, hey, why don't you use Libre software for healthcare? What is stopping our countries from using free software, Libre software, in healthcare? The tools are there, and if it's a public good — public healthcare should be universal.
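The encryption layer mentioned at the start of this answer — GnuPG for signing and encrypting documents, bcrypt for hashing — can be sketched roughly as below with the python-gnupg package. This is a generic illustration, not MyGNUHealth's actual code; it assumes a local GnuPG keyring with a usable private key, and the recipient key id is hypothetical.

```python
import os

import gnupg  # from the python-gnupg package

# Use the local keyring; assumes a private key is already available there.
gpg = gnupg.GPG(gnupghome=os.path.expanduser("~/.gnupg"))

record = "2020-10-17 blood pressure 118/76, glucose 92 mg/dL"

# Clear-sign the record so the recipient can check who produced it.
signed = gpg.sign(record)
print(signed.status)

# Verification of the signed blob on the receiving side.
verified = gpg.verify(str(signed))
print("signature valid:", bool(verified))

# Optionally encrypt the record to a recipient's key (hypothetical key/address,
# which must already be imported into the keyring for this to succeed).
encrypted = gpg.encrypt(record, recipients=["doctor@example.org"])
print("encrypted ok:", encrypted.ok)
```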
And we are in Europe, and our politicians should legislate to use — whether it's GNU Health or whatever other hospital management information system they want — as long as it's Libre, it's fine with me. It's fine with me. We are a social project; we are not into informatics, we are into social medicine, and we use technology to deliver this social medicine. Let's see what that comment says — about having to call the local government to get the privacy policy of the third-party collector they are using — yeah, exactly, it's always the same: we don't know what's going on. When they came out with the COVID application here at the beginning, on the phones, it was a closed-source application. I started tweeting and calling, and in the end, now it's Libre software — the code is there for the COVID-19 mobile application. And that was good, that was great, but if we hadn't called them and tweeted around, it would have remained closed. And, you know, I don't want a closed-source application on my mobile phone, especially for something as critical as healthcare. That's why I want to have these applications on phones where I know every single line of code of the operating system, or at least have the choice to go through any line of code of that operating system. You cannot do that with Android, and of course you cannot do that on iOS. So now, with the PinePhone, I think we have a very bright future. We have to support these sorts of projects, because that's the phone I want to use on a daily basis. I feel very comfortable using openSUSE on my laptop or FreeBSD on some of my servers, and I want to do the same on my phone. And the next step is that every single hardware device related to health should be open hardware. I want my pulse oximeter to talk to something that I know is sending that information only to me. Same with all the glucometers — I don't know what those are doing. I want all the circuitry of that glucometer, and all the protocols, to be open, so we can connect the PinePhone and MyGNUHealth to the oximeter, the glucometer and the tensiometer. That's beautiful, because if we do that, we are not going to be making mistakes when punching in the blood glucose level or your systolic or diastolic pressure. If we have something that automates that and is open hardware, then we have the whole circuit — then we close the whole loop, and that's what we need to ask for. And there are some devices already that are open in that sense. Oh, Windows XP? No, thanks. No. Anything that is... And again, if you want to use it, it's up to you. I mean, it's Python code, so with some degree of work you should be able to run it on other, non-free operating systems. But why? I think we are also sending a message to our society. I know, it's terrible, but that's what many — or I would say most — of our governments are using today: non-free operating systems to run our health services and to store our data. And I think we have a moral issue with that. It's our moral duty to talk to our governments. And, yeah, exactly. You know, that's what happened in the...
I think it was something like 16,000 records in the UK, of people who were positive for COVID, for SARS-CoV-2, that were not entered because they were entering the information in an Excel spreadsheet. It's appalling. I mean, why do it that way? It's just crazy, because at the end of the day there are good politicians, but there are also many ignorant politicians running our healthcare systems, and it's our moral duty to move that away from our society. Because in the end, healthcare is not so much about technology; it's about good policies, about equity and universality, and good measures around that. Okay, Axel — after complaining about the use of the... I got the reply. Yeah, sorry, that's in German, but it just means that in the public sector people mostly really don't care about privacy, or something like that. They say, well, if they've signed the contract and they say they're sticking to the law, then we're okay. But that reminds me of another sentence in German — "Niemand hat die Absicht, eine Mauer zu errichten", nobody has the intention of building a wall — which is what the leader of the German Democratic Republic said just as they were starting to build the Berlin Wall, right? So we must raise much more awareness in the public sector that these proprietary vendors are not necessarily the place to hold our data. Yeah, and what the folks at the FSFE are doing — I think that's a great campaign, and we should all support them and work with them. They have vast experience in talking to politicians and policymakers. And sometimes politicians simply don't know. You know — so, Gabriele says: "I made the Italian voiceover of the video" — the Public Money, Public Code one, I guess, right? Oh, okay, yeah, exactly. That's a beautiful campaign and we should take it to all our politicians, set up a date with them and say, hey, listen — like what we did in Düsseldorf last time, Axel — give us 15 minutes of your time. Give us 15 minutes so at least you get a grasp of the importance of Libre software in public administration, whether it is education or healthcare, no matter what. Public Money, Public Code is such a powerful campaign that we should really, really endorse it. So, guys, if you have any questions, you can also ask Axel — by the way, he's part of the GNU Health project — or me, or just join us on the mailing list or on Telegram; we have a Telegram channel too. And in November, on the 20th and 21st, we'll have GNU Health Con, the annual conference we do every year — let me just put it here. We'd love to have you there. By that time we should have something in place, some demos, so we'll take the PinePhones and play around with them. It will be online, of course, because of this context. But I think you will get a much better idea of the project itself — not just the mobile devices, but its philosophy and what we are doing around the world — from the presentations. So thank you, everyone, and I'll see you very soon. Thank you. Thank you, guys.
GNU Health (GH) is the Libre Health and Hospital Information System. GH is a social project that combines the socioeconomic determinants of health with state-of-the-art technology in bioinformatics, LIMS and genetics. The GNU Health ecosystem works in the areas of demographics, socioeconomics, epidemiology, and patient and institution management. It has been deployed in many countries around the globe, from small clinics to very large, national public health implementations. MyGNUHealth is GH's Personal Health Record application, focused on mobile devices, that integrates with the GNU Health Federation. We'll talk about what led us to choose the KDE Kirigami framework to develop MyGNUHealth, some technical insights and the community behind the project. In this talk we will discuss the benefits that Plasma Mobile and the Kirigami framework provide to MyGNUHealth. After a short introduction to the GNU Health philosophy and ecosystem, we will focus on the need for a Personal Health Record (MyGNUHealth) that can be used both on mobile devices and on the desktop, and the benefits it delivers to the person, the patient-doctor relationship and the system of health in general.
10.5446/54642 (DOI)
Hello, this is a talk about OOXML and PDF digital signing in LibreOffice. This talk is meant to be focused on X.509 certificates, creating signatures using those, and verifying those signatures, so I won't really be talking about the GPG-based signing, which is a different piece. Regarding me, perhaps I'm already familiar to many of you: I'm Miklos Vajna from Hungary. I've been at Collabora for many years now; I started here as a GSoC student, around the Writer RTF import/export, and nowadays I mostly do things around Writer. So let's start with an overview of what was already available as a digital signing feature set in OpenOffice.org, how we developed new features on top of that, and then finally reach what's new this year. The digital signing we had in OpenOffice.org times was limited to ODF signing: we literally had hard-coded conditions in the code saying that if it's not ODF, then it's impossible to digitally sign something. And when it comes to signing, you always have to decide what hashing algorithm you use to create a digest from your original content, and then you actually sign that digest. For digesting, only the older MD5 and SHA-1 were supported, not the newer SHA-256 or anything better; and only RSA was supported, so the newer ECDSA or anything else was not supported. Regarding verification, the process is somewhat straightforward: it can check if the digest is matching, which means that if you modify the document as an attacker, we get a different digest compared to what was signed and we can detect that the document was modified. It can also do certificate validation: we have some trusted root certificates, and there should be a chain from a trusted root certificate to your certificate; if there is no such chain, that also fails the validation. There is also an interesting attack where you append new streams to the ZIP package, or basically put data before or after the signed content, so we also check for that. And as mentioned, this is all done with an X.509 certificate, so no GPG — that was added later. The first thing I added, four years ago, was OOXML signing, which is basically digital signing for the DOCX, XLSX and PPTX formats, to improve the interoperability with Microsoft Office. This is somewhat similar to ODF signing, because it builds on the W3C specification, the XML digital signature core specification, and ODF does the same. So at the very bottom layer we are just signing an XML fragment, and how we do that is actually the same for OOXML and ODF. One interesting part here, for you as a user, is that they never sign the metadata for OOXML files. This is probably because Microsoft wants to upload the files to SharePoint and tweak the metadata there, or something like that. If you open the files in LibreOffice, then the LibreOffice standard is that metadata should be part of the signature, so OOXML signatures won't be recognized as perfect: the best level you can reach is a partial signature. It's important that we are meant to read what Microsoft Office writes, and also to produce something they can read. What's somewhat interesting is that there are these different transform algorithms in the XML digital signature spec, and compared to ODF there is a special, custom one in the OOXML spec, and we offload most of this XML signing work to the xmlsec library, which has a hard-coded set of transforms that it supports.
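For readers who have not looked at XML digital signatures before, the digest step that both ODF and OOXML signing rely on boils down to canonicalizing an XML fragment and hashing the result. The sketch below illustrates just that step in Python with lxml and hashlib; it is a conceptual illustration, not LibreOffice's code, and it omits the OOXML-specific relationships transform discussed next.

```python
import base64
import hashlib

from lxml import etree

# A stand-in for one signed part of the package.
part = etree.fromstring(b"<document><body>Hello, signed world.</body></document>")

# Exclusive XML canonicalization (c14n), so that irrelevant serialization
# differences (attribute order, whitespace inside tags, ...) do not change the hash.
canonical = etree.tostring(part, method="c14n", exclusive=True)

# The base64-encoded SHA-256 digest is what ends up in the <DigestValue>
# element of the signature's <Reference>.
digest_value = base64.b64encode(hashlib.sha256(canonical).digest()).decode("ascii")
print(digest_value)
```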
So I had to go to the xmlsec library and add support for this relationships transform algorithm, because it influences what input data will be used for hashing. Once this was contributed upstream, we could use it for OOXML signing purposes. Otherwise, the OOXML signing markup is a bit awful: another interesting piece is that it leaks quite a lot of your software and hardware details — you are supposed to write your Windows version, your Microsoft Office version, how many monitors you have, what resolution your monitor has, and so on and so on. It's a very interesting question what your Windows version is if you use LibreOffice on a Mac or on Linux, so we have some hard-coded stub values there that satisfy what Microsoft Office requires without really leaking your data. The next step was that the PDF export gained an optional way to create a signature during the creation of the PDF file itself. This was originally done as a GSoC project, and then we completed it, because we had a customer who wanted to see this brought to completion. What we do there is basically: we do the PDF export, we write a placeholder for the signature, and then we do the standard binary signing using an X.509 certificate — the PKCS#7 spec defines how to do a binary signature on the hash of the original content. Then we do a hex dump of this signature and put it into the placeholder inside the PDF file, and what's not used from the placeholder is just filled up with padding. So that was for new PDF files and new signatures. Then we wanted to improve this so that we can also verify those signatures, and this requires digging into quite some layers. First I wanted to understand what existing PDF parsers we have in LibreOffice, because of course we had multiple ones — at that time we had three. Poppler is used to get an editable ODG file out of a PDF input; the primary problem there is that this is not available everywhere — on iOS, for example, if you are focusing on the apps, it's just not there. We have a quite hard-to-read boost-based parser, which as I understand it is mostly used just for hybrid PDF, so that you can get back your original Writer or Calc or Impress document from a PDF file — so that even if there is no LibreOffice around, you still have the PDF data there. And I checked what the situation was with PDFium, but at least back then it had no API to extract all the signature details that we need. So we needed a solution where we could build the missing piece ourselves: I went and added my own PDF tokenizer, but this one is really just tokenizing the PDF data — it's not parsing what's in the object streams, which is probably the harder part of the whole PDF parsing. And the basic verification is not that complicated: we need to determine where the signature is inside the PDF file, hash everything before and after it, and determine if the digest is matching or not. But then of course you can make things more complicated: you can have multiple signatures in a PDF file, and the signatures are chained by definition, so the second signature always includes the first signature as data. Previously I mentioned that we want to have a signature which covers the complete document, and if a signature is partial then we consider that a failure. For PDF you can't really do this, because technically everything except the last signature will be partial. So we do some middle ground there.
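The two PDF mechanics described here — filling a reserved /Contents placeholder with a hex-dumped PKCS#7 blob, and verifying by hashing everything before and after that placeholder (the /ByteRange) — can be illustrated with a small Python sketch. This is a conceptual model only, assuming a pre-computed signature blob and an already-parsed ByteRange; real PDF signing has to track exact byte offsets in the generated file.

```python
import binascii
import hashlib

# Size of the <...> placeholder chosen at export time (assumed value for the demo).
RESERVED_HEX_CHARS = 8192


def fill_contents_placeholder(pkcs7_der: bytes) -> bytes:
    """Hex-dump the binary PKCS#7 signature and pad it to the reserved size,
    the way an exporter fills the /Contents placeholder it wrote earlier."""
    hexsig = binascii.hexlify(pkcs7_der).upper()
    if len(hexsig) > RESERVED_HEX_CHARS:
        raise ValueError("signature does not fit into the reserved placeholder")
    return hexsig + b"0" * (RESERVED_HEX_CHARS - len(hexsig))


def byte_range_digest(pdf_bytes: bytes, byte_range) -> str:
    """Hash the two regions covered by /ByteRange [off1 len1 off2 len2]:
    everything before and after the /Contents placeholder itself."""
    off1, len1, off2, len2 = byte_range
    h = hashlib.sha256()
    h.update(pdf_bytes[off1:off1 + len1])
    h.update(pdf_bytes[off2:off2 + len2])
    return h.hexdigest()
```

Verification then compares this digest against the one protected by the PKCS#7 signature; a later signature simply covers a longer byte range that already includes the earlier one.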
We try to find out whether the first signature is partial only because a second signature was added, or whether perhaps some unexpected content was inserted between the signatures by an attacker — in which case the first signature will be reported as partial. So that's a bit complicated, and it's a bit sad that these hacks are required for real-world multiple-signature usage in the PDF format. So, once you can verify a PDF signature, of course you want to create PDF signatures. As I said, creating new PDF files with signatures was already supported, but you can also take existing PDF files, where perhaps the user says: I just want to sign this. So this is now working with all combinations: LibreOffice creating PDF files, Adobe Acrobat creating PDF files, then creating the initial signature, then second and third signatures, and swapping between the two pieces of software. That's lots of combinations, but I believe it now works nicely. The hard part is really that we are expected to parse arbitrary PDF files, and that is a much larger, much richer markup compared to the subset we produce in our own PDF export; previously it was only necessary to parse what our own export produced. One thing you can do on top of existing XML signing and PDF signing is a set of recommendations layered on the XML digital signature recommendation and on the PDF spec: the XAdES and PAdES signing, which has the promise that if all conditions are met, it can result in a signature that is actually legally binding — which makes it very interesting. So we had a checklist of what was obviously missing from LibreOffice to create such signatures. One thing was SHA-256 support as a digesting algorithm, and now that is possible. Also, only RSA was supported, so ECDSA support was added. And one very important piece is that you have to make sure not only that a certain private key was used to create a signature, but also which certificate was used, because there is a trap here: you can have the same private key in multiple certificates, and it's the certificate that contains your name and other details. So as an end user, you want a signature that actually ensures that this particular certificate was used for signing, and the original digital signing was not providing this. The bottom line is that when this work was finished, there is this DSS digital signature service validator which can check whether you conform to the different baselines of the PAdES standard, and we are passing the basic checks there — you get a nice green check mark. The news this year is that, when signing existing PDF files, so far we were signing a kind of stub signature widget on the first page, zero size, in the top left corner, and now we can actually create a visible signature widget which is semantically associated with the actual digital signature. You get a user interface quite similar to the existing signature lines in Writer or Impress: you draw your signature rectangle somewhere, you get a nice vector-based graphic there, using the correct PDF markup; once you've drawn the rectangle you can fine-tune it, in case the size or position is not exactly what you want, and then you actually do the digital signing. As a combination of these, I believe this is currently a bit better than what you get from DocuSign or Adobe Acrobat. So that sounds pretty nice.
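The "which certificate, not just which key" requirement mentioned above is handled in XAdES/PAdES by putting a digest of the signing certificate among the signed attributes (the signing-certificate-v2 idea). The Python sketch below shows only the certificate-digest part, using the `cryptography` library; the file name is a placeholder, and wrapping the digest into the actual CMS signed attribute is out of scope here.

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes

# Load the signing certificate (placeholder path).
with open("signer.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# SHA-256 digest of the DER-encoded certificate: this is the value a
# XAdES/PAdES verifier compares against, so swapping in a different
# certificate for the same key pair is detected.
cert_digest = cert.fingerprint(hashes.SHA256())
print("certificate SHA-256:", cert_digest.hex())
print("subject:", cert.subject.rfc4514_string())
```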
In case you are more interested in the technical details of how all of these features are actually implemented: one thing we added was signature descriptions — OOXML and PDF both have markup for this, and we were losing this data. You can also think of these as signature comments or signature reasons. An entire blog post was dedicated to this; the link is on the slide, you can click on it if you are interested in even more details. The point is that this way it makes sense to use the same signing certificate in multiple signatures, because in the description you can state the reason for your signature — perhaps you first want to state reason A and later reason B, and then use the same signing certificate on the same document multiple times. You can also import the signatures from OOXML. As mentioned, this required an implementation of the relationships transform algorithm. That was a bit tricky, because the ECMA version and the ISO version of this algorithm are actually different, and I believe there was a bug in the ECMA version — so if you implement the ISO version, you get the same result that Microsoft Office produces. There is a small SAX parser in xmlsecurity to actually read this OOXML signature, and with this — well, we were still only supporting ZIP-based formats, like OOXML and ODF — but there is no longer a hard-coded condition saying that if this is not ODF, then it's impossible to digitally sign the format. Then, once you could import and verify those signatures, you perhaps want to add your own. In the OOXML case, each new signature is a new XML stream in the ZIP package, which is somewhat nice because you can't easily break existing signatures — or at least it's harder to do that by accident; on the other hand, it requires some bookkeeping on how these signatures are referenced and so on. Also, some refactoring was done so that most of the signing logic moved outside the dialog, so that it can be triggered from CppUnit tests. The verification of existing PDF signatures just happens automatically when you open a PDF file, and we have some UI where we try to discourage users from editing a file that has signatures, because if you edit the file, you will lose your signatures. The PAdES support required basically improvements to the hashing and encryption algorithms we support, and there is also a spec on exactly how to embed the signing certificate into the existing binary signature — if you implement that, you get the nice green check mark from the DSS validator. That was a separate task: exactly the SHA-256 and the ECDSA support. I was personally interested in that because there is a Hungarian electronic ID you can get as a Hungarian citizen, with a signing certificate on it, and if you get a certificate reader you can actually use it for signing, and this is recognized by the government and whatnot — so some real hardware-based signing. I know this is working on Windows and Linux, and what was really challenging there is that ECDSA support was not working in the older API we were using for encryption and hashing, so we rewrote that part to use the Microsoft Cryptography Next Generation API, the CNG API, and that's working nicely. The last piece was this visible PDF signing, where I tried hard to reuse existing code.
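Since ECDSA was one of the items on that checklist, here is a minimal, generic ECDSA sign-and-verify round trip in Python with the `cryptography` library — just to show what the algorithm pair (ECDSA with SHA-256) does, not how LibreOffice or the CNG API implement it. The in-memory key below stands in for the one that would actually live on an eID card.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Stand-in key pair; on a real eID card the private key never leaves the card.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

data = b"content to be signed"

# Sign with ECDSA over SHA-256 (the library hashes the data internally).
signature = private_key.sign(data, ec.ECDSA(hashes.SHA256()))

# Verification raises InvalidSignature if the data or signature was tampered with.
try:
    public_key.verify(signature, data, ec.ECDSA(hashes.SHA256()))
    print("signature OK")
except InvalidSignature:
    print("signature BROKEN")
```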
So this visible PDF signing is very similar to the signature lines you may already know from Writer and Calc; the generated visible signature object reuses the existing export-shape-to-PDF functionality, so it's nicely vector-based, and then we copy this PDF object from the shape's PDF into the final PDF, using code reused from the insert-PDF-image functionality. Last, as always, don't forget that Collabora is an open source company, so what we do and share with the community always has to be paid for by somebody. In this case, the Dutch Ministry of Defence, in cooperation with Nou&Off, a small Dutch company, made this work by Collabora possible. The majority of the functionality presented in this talk was paid for by them, so a huge thanks to them — this is a great feature set, and it was possible thanks to them. As a summary, the good news is that compared to the original OpenOffice.org feature set, we support the XAdES and PAdES extensions, the baseline profiles; we support modern hashing and signing algorithms; you can sign not only ODF but also OOXML and PDF files; this works nicely with the matching products, like Microsoft Office and Adobe Acrobat; and the news this year was visible PDF signatures. Thanks for listening. Bye bye.
LibreOffice originally only supported digital signing for ODF files. Collabora later extended this support to cover OOXML files and also signing existing PDF files in Draw. The latest news is adding visible signatures to existing PDF files. The talk will walk through the steps that were necessary to add support for this feature, from document model to layout, from UI to file filters. Come and see where we are, what still needs to be done, and how you can help.
10.5446/54645 (DOI)
Hello, I am DaeHyun Sung from Korea, and I will talk about building the LibreOffice Korean community. Korean open source localization and documentation work goes back a long way — for example the KLDP site, the Korean Linux documentation project — and Korean open source developers have also developed an open source library for the HWP (Hangul word processor) format. In 2017 I visited COSCUP in Taiwan as a Korean free software contributor, and there I met TDF board member Italo Vignoli as well as Taiwanese and Japanese TDF members; his presentation left a strong impression on me. After that I started building the LibreOffice Korean community and contributing source code, including fixes to the Korean Hangul/Hanja dictionary. My first patch went through the Gerrit review system, with help from the Japanese developer Naruhiko Ogasawara. Last year I became a TDF member, and I have interviewed other TDF members and published the interviews on The Document Foundation blog. This year I am participating as a mentor for the NIPA open source contribution event, supported by the Korean Ministry of Science and ICT, teaching students and the public how to contribute to LibreOffice. I also showed examples of the CJK differences: the same ideographs are written with different glyph shapes in Korea, Japan, mainland China, Taiwan and Hong Kong, and pan-CJK fonts such as the Noto CJK and Source Han series currently cannot be distinguished per language in LibreOffice. Thanks.
1. Building the LibreOffice Korean Community
In 2017, I visited Taiwan's FLOSS conference, COSCUP (Conference for Open Source Coders, Users, and Promoters), as a Korean FLOSS contributor. When I attended COSCUP, I met TDF board member Italo Vignoli, Taiwanese TDF members, and Japanese TDF members. When I watched TDF board member Italo Vignoli's presentation, it left a strong impression on me. As a result, I am building the LibreOffice Korean community. I talk about my experience of starting source code contributions (fixing a bug in the Korean Hangul/Hanja dictionary) and running a community from 2017 to now (2020). This year, I participate as a contribution mentor for NIPA's contribution event, in cooperation with NIPA (the National IT Industry Promotion Agency, an IT industry promotion organization of the Republic of Korea). (The event teaches students and the public how to participate in open source activities.) With the help of NIPA, I will participate as a mentor for LibreOffice Korean-language contributions and share how to promote LibreOffice in Korea.
2. CJK's common issues and differences
LibreOffice is mainly developed by Western European language speakers, so CJK issues in LibreOffice are often caused by developers who don't understand CJK languages. Countries in the East Asian cultural sphere use ideographs (漢字/汉字, Chinese characters; Mandarin Chinese: hànzì, Japanese: かんじ kanji, Korean: 한자 hanja). But the glyph shapes of ideographs differ between countries and regions in East Asia, and some ideographs' meanings and sounds are also different. The numeric expressions in Korea, China, and Japan are likewise sometimes similar and sometimes different. I will show some common consistency issues and differences between CJK (Chinese, Japanese, and Korean) and talk about how to handle them and cooperate in LibreOffice. I found and fixed some CJK bugs in LibreOffice. I deal with common CJK issues and differences, such as numeric expressions in Korean and Japanese, and the inability to distinguish Korean and Japanese fonts among pan-CJK fonts (such as the Noto CJK font series and the Source Han series, etc.). Because Google's Noto CJK font series and the Source Han font series are the first open-source pan-CJK (Chinese, Japanese, Korean) typefaces, it is not possible to distinguish the CJK variants with the existing LibreOffice source code. This talk also shows the differences between ideographs in East Asia, such as in Mainland China, Taiwan, Hong Kong, Korea, and Japan.
10.5446/54650 (DOI)
So then: rolling, yes, rolling, rolling, rolling, that is what I am here to talk about today. Yes, my name is Richard Brown. In fact, I have an entire slide about who I am, but I hope most of you know who I am these days. You know, I've been in openSUSE since it began. I'm a real passionate advocate for rolling releases. I build two of them in the Future Technology team at SUSE, and I'm here to talk to you really today about, well, my opinions on this topic. This is a very opinionated presentation. I don't normally put disclaimers in front of my talks. I hold really strong opinions on this topic. I might offend some of you. I apologize. These are my views, not the views of my employer or any project or group I've ever been affiliated with. And it's perfectly fine if you disagree with me, even if I'm rather forthright with my views in this session. Yeah, and, you know, we can talk about that afterwards. We've got the break at the end, which is great because I've got an awful lot of slides here, so I'm hopefully not going to eat too much into the break, and then we can keep on talking afterwards. Anyway, to start at the very beginning, one of the things I've increasingly realized is upstream projects change quickly, you know, even conservative upstream projects change very quickly. You know, the kernel every three months, Kubernetes every three months, SaltStack every six months, you know, nothing is ever staying still. And, you know, the developers we're working with upstream aren't staying still. And our users don't want to stay still either, because they see this shiny new stuff upstream and they want it too. It gets even worse when you then think about what upstreams actually support, you know, the standard upstream kernel, the stable release, you know, is lucky if it lasts four months, because it's basically until the next one comes out. Even the LTS release, you know, is only six or seven years, which is why Greg Kroah-Hartman says these days, you know, even though he's the maintainer of the LTS release, like, you know, use the distro one first, because if you want something longer supported, that's going to be done better than even all of upstream can do. Kubernetes, you know, the incredibly popular new thing, you know, it can just about handle a year of updates, like, you know, a year of support after a release, you know, and that's only in the latest release I only put in Tumbleweed a couple of weeks ago. But before that, it was nine months. And yeah, SaltStack, yes, it's one and a half years, which is kind of the longest I could really find, like, generally, like some things will support themselves for like two or three versions. So when they're being released every six months, one and a half years kind of comes around. But even in the case of SaltStack, you know, they're frozen after six months. And, like, Ceph is like the longest upstream supported thing I could really find that we know we're using heavily in any of the sort of SUSE ecosystem. And you know, that's two years, which is still way shorter than even the shortest openSUSE release, besides the rolling ones, of course. As a project as well, we have a whole bunch more upstreams too. So not only are all these upstreams moving quicker and quicker, you know, we aren't just doing one regular release these days, we're doing Leap and Jump. Hopefully it'll just be one of them soon.
We're doing Tumbleweed, we're doing Uyuni, we're doing MicroOS, we're doing Kubic, we're doing like 20, 30 other things I didn't bother mentioning on this slide. Because, you know, all of these projects in openSUSE have a whole bunch of upstreams that they're working with themselves too. And if you look at, like, Kubic as an example, you know, the whole kind of cloud native container ecosystem — the Cloud Native Computing Foundation is kind enough to put all of that together in one fancy graph. And oh my God, it's horrific. You know, these are the projects which, as the Kubic maintainer, I have to worry about, you know, integrating with, co-working with. And every box on that graph that isn't bright white — well, every box on the graph that is gray — is a closed source project too. So it's not something that is like just trivial for me to, you know, throw into OBS and build it myself and test it myself; you know, the term interoperability is, you know, a case of not just working. Oh, I've just lost the slides. Can everybody else still see them? Yeah, fine. For some reason, the slides have disappeared for me, but as long as everyone else can see them, cool. Then, yeah, you know, it's crazy, it's complicated. And in order to interoperate with this, you know, we have to move to keep working with it, especially when, like, the closed source stuff, you know, we have no way of influencing it. You know, it's an upstream which we have to work with, but we can't see their code, we can't send pull requests, we can't backport anything, and, you know, we're just sort of slaves to their moving. And this is the world we live in, more and more, you know, things have to work even outside of our open source bubble. I mean, you're seeing this on the factory mailing list right now with the discussions with Nvidia. Yeah. And, yeah, even the projects we've had for a while, you know, they're getting bigger and bigger and bigger, you know, the kernel is not shrinking anytime soon. And, you know, as this article pointed out, you know, the kernel isn't getting any smaller, but the number of contributors we have to it is, which starts worrying me, you know, do we have enough? Are we sustainable? Are we really doing things the way we are right now? Is this really going to last us for the next 10, 15 years? Especially when you look also at the other projects, like, again, Kubernetes, you know, that's growing both in terms of files and lines of code. And, yeah, at their heart, I know regular releases mean well, so, you know, please don't get too offended about all the nasty stuff I'm about to say, you know, because, you know, at the heart, we're all trying to solve the same problem, you know, you've got thousands of moving parts from thousands of different upstreams. And at the end of the day, as distribution developers, we want to find some way of putting this in the hands of people, in a way they can actually use the darn thing. And, you know, everybody is nervous of change. Developers, you know, the people building the distribution, we're nervous about changing it. And users don't want to, you know, have their systems break, have their systems change. And, you know, you can't break anything if you don't change it. Which is, yeah, weird but true. Oh, wonderful. Yeah, so change is dangerous. You can't break anything if you don't change it. But even regular releases need a heck of a lot of change.
And so, you know, most of these regular distributions, you know, we've all kind of asked ourselves, what's the best way of avoiding that? And, you know, we'll just make the smallest amount of changes possible, because, you know, minimum changes are safer, right? Well, no, because what we end up doing is, you know, taking some of that stuff from upstream at a certain point, and then we freeze it. And at the point when it was released, it was, you know, designed and tested by upstream with the whole ecosystem of dependencies that it needed at that time. And then we freeze that one thing. And then we just say, okay, we're not going to touch this for four or five years, or six or seven, or 15 if it's silly. But then other stuff still needs to change, you know, security updates still need to happen. And then, but we don't want to make too big a change to that package. So we backport, and we just make minor little backports on top of that one thing. But that minor backport, you know, wasn't tested with that entire ecosystem of other stuff that made the distribution up. So we end up creating sort of this lovely sort of Frankenstein's monster of a distribution, where we have to be certain ourselves that everything we have put together is built properly, is working properly. And, you know, it was never designed to be done in that way. You know, it's not safer in the pure sense, because it's not really engineered to be done that way. We're just kind of hacking around the fact that we've decided to go slower. And so, you know, inherently regular distributions are Frankenstein distributions. And that really terrifies me. And also fundamentally, it doesn't even work. You can see this by looking at SLE 15, you know, which is, you know, an enterprise distribution, one of the most conservative distributions you can get, it's going to be supported for 13 years since its release. SLE 15 has been out now for three years. In those three years, they've changed 13,000 packages. And not just like minor little backports either, that actually includes like over 2,700 actual package version changes in service packs. And the entire code base is less than three and a half thousand packages. So I mean, those numbers don't mean they've replaced the entire code base four times, like a lot of the changes happened in very specific areas. But that's a huge amount of change, which just kind of proves that this mentality of, you know, oh, a stable distribution can be done, isn't actually true. And therefore, what we end up doing is actually hacking around it and pretending to ourselves and to our users that, you know, we're stable. When in fact, we're just kind of rolling badly and lying to ourselves. And this isn't including, like, the 10,000 packages that are in PackageHub. Like this is just the pure SLE code base. So, you know, if you look at something like Jump or Leap, you know, those numbers are even bigger, there's even more change that needs to happen there. And when I was putting together these slides, I also wanted to kind of think a little bit about the psychology and the appeal beyond our little bubble of developers and open source contributors, the bubble we have right now in openSUSE, because, you know, I think inside SUSE and openSUSE, you know, we typically lean towards the conservative side of things.
So, you know, this graph is just a kind of model of market adoption and which kind of people adopt new products at what kind of pace and when in the life cycle of a product. And typically speaking, you know, we probably lean to the second half of that bell curve, you know, we have lots of people who are conservative and, you know, are happy being part of the late majority or the laggards to a technology, they're not necessarily that keen to be first. And that's fine if you're one of those, but, you know, fundamentally as an open source project, when you start looking into the sort of typical traits of those people, those aren't the people who are that enthusiastic about technology, they are unlikely to be heavily engaged with that technology, they're unlikely to contribute back to that technology, they're also unlikely to have lots of spare money to invest into that technology or to contribute back financially. And so when I think of what openSUSE needs to be, you know, as, you know, as we keep on going, as we keep on moving forward, you know, I realize, you know, we need to start appealing more to that left-hand side of that bell curve, to getting hold of far earlier adopters, far earlier innovators, get them dragged into the project, move them into the project, encourage them to be part of it, encourage them to contribute back — they're going to be more likely to contribute back — and, you know, potentially encourage them to invest and support and donate and all that other good stuff too. And yeah, so ultimately, you know, slow, regular releases are not a more sustainable way of distributing software, you know, every upstream is getting bigger, we're getting more upstreams, every time we freeze or divert from that upstream, that's more work for us, and we're not getting that much bigger, we're not getting a huge pile of contributors, we're not getting more spare time to work on this stuff. So, you know, we're just risking burning ourselves out every time we do anything in a regular release, as these regular releases get bigger and bigger and bigger. And ultimately, like, first principles of open source, like, the whole premise of open source is, you know, we're meant to be doing all of this stuff together as a community, you know, like, Linus's law states, you know, given enough eyeballs, all bugs are shallow. And yet every single regular release, like, throws almost all of those eyes away. So you're left with a tiny stub of a small subset of contributors who are just working on your regular release and just working on the specific packages in those regular releases. And so the whole premise of this open source movement is, you know, left to not actually benefit us. You know, all bugs are suddenly deep because we've packaged a different version from everybody else, and we're using it against libraries that are different from everybody else. And therefore we're on our own, when the whole point of this is meant to be we're working with others, right? And whenever I talk about this stuff, the first thing that everyone throws back in my face is like, okay, fine, like, we get you, Richard, we know where you're coming from. So why not, like, have a distribution which is, like, partially slow or partially rolling, because I want to have some things stable, but I want to have some things moving.
And I have to point out this example, because, you know, this is where Tumbleweed started back in 2010, when Greg Kroah-Hartman started it — that's exactly what Greg was doing, building a rolling repository on top of the openSUSE regular releases. And actually, since I first talked about this, I even heard there were even earlier experiments internally at SUSE where they tried it too. And it always ended up with the same result. Like, you know, whatever you did with that rolling part, you know, you would end up having to overwrite and supersede packages from that stable part, and then you'd have to have some way of like rebasing it or resetting it to zero every release. And the impact on users and the impact on engineering was an absolute nightmare. You know, just building it became an absolute pain, because as the chasm grew between the stable bit and the rolling bit, you constantly had new breakages that no one had tested for — like even worse than the stable version, because you had more change, because, yeah. So, you know, even worse than keeping things stable and backporting everything, you had sort of that element plus the fact you were trying to move faster. And then when you did try and fix those issues by, like, ad hoc tinkering or superseding inside the stable base, then it stopped the stable base being stable. And then, yeah, resetting everything to zero was brutally disruptive to users, because no matter how much we tried, we always ended up with the rolling part going in one direction, the stable base going in another, and then, like, everything got mashed together. And, yeah, users suddenly found they had a completely different system from what they were expecting every eight months. Parallel to this, though, we were trying to find ways of making Tumbleweed more stable — sorry, Factory more stable. And, you know, Factory in the Build Service, like we've always built it, building everything together in one codebase, rebuilding the entire dependency tree as stuff's added. And with, you know, at the time, we were then also adding openQA and making openQA a key part of the release system. So, you know, testing it the way users want to use it, leveraging openQA and all this other stuff and only shipping when it's all green or green enough. And then, you know, well, then, you know, that process has worked. We've proven it to be a very sustainable and reliable way of doing things, you know, Tumbleweed is now six years old. It's still sustainable. It still works for its target audience. More about that later. The other thing that people throw at me whenever I talk about this stuff is, okay, fine, you know, you could have a rolling base system and then just, you know, containers will fix everything anyway, Richard. So why are you even worrying about this operating system stuff anymore? And so, yeah, it started with AppImage, which, you know, is kind of a fun example, because with the desktop examples you always get, like graphics, you know, people are more likely in this group to be hands on with it. And, you know, it promises to be a portable format where, you know, your Linux app can run anywhere. And there's plenty of upstreams using it. Yes, I know there was a LibreOffice talk about AppImage here today.
There's one problem: like, it promises to run everywhere, apart from it doesn't run everywhere, and they even document that it doesn't run everywhere, because you have to cram all the system dependencies for every distribution you possibly could want to run it on into the AppImage. So if you don't want to make an AppImage that is a couple of terabytes in size — exaggerating slightly for effect, but you get the idea — you know, then it's going to be a subset of distributions it works on. There's going to be a subset of distributions it doesn't work on. And this isn't just me, you know, bashing on AppImage, because I really don't like it. You know, this is true with all containers everywhere. You know, even my beloved baby Kubic, where I'm running Kubernetes, you know, you have a situation there, which I won't go into detail on too much, because I could probably talk about that for half an hour on its own. But, you know, the containers running on a host still have dependencies from the host, you know, they still have to have the right container runtime, they still have to have the right kubelet in this case. So when you need to upgrade your containers, there are times where you need to make sure the containers get updated first before the software on the host operating system does, otherwise they stop talking to each other. And there's sometimes the inverse too, where you have to make sure the base system is updated before the containers are, you know, but Zypper doesn't know about that. No package manager knows about that. That becomes, you know, a fun complicated challenge. So basically at its heart, this idea that containers are, like, totally distribution neutral, and you can run any container on any machine, and it's wonderfully isolated, is a myth — you know, there are some cases where it's true. But you still, if you're doing containers properly, you still need to at least think about it the same way you think about a traditional distribution, you know, build everything properly, test everything properly, release it all aligned together. And doing that with traditional RPMs is what we've been doing in Kubic, and we found it just works really, really smoothly. Part of that, like I say, is just containers can be, like, really unfair and, like, require stuff from the host which might not exist on your system if you're not careful. So yeah, that needs to be taken into account. If you take it into account, you actually end up with a weird situation where, because containers do try and isolate themselves from the host, and because you're testing everything and because you're building it all consistently, you kind of know where those fracture points are going to be — things like when there's a new glibc library popped in and therefore, you know, all your containers are building differently than they used to. So once you're aware of those kinds of fracture points — or, like, in the case of Kubic, actually, Kubernetes versions do this every time — once you know where those risks are, you can actually be more liberal elsewhere. So you can have a situation where you do have the base system moving at a different pace from the containers, but it's only going to work with specific containers, it's only going to work with specific containers at specific times, you can't treat everything as equal. So when it comes to rolling releases, this is something that I've been talking about for a while. Well, there is this — well, what I now consider a kind of fundamental axiom with rolling distributions.
If you want to be able to move any part of a complicated system like a distribution, you need to have a process in place where you can change everything. And this is where, like, OBS helps, this is where openQA helps, this is where our release process in Tumbleweed helps, you know, where, you know, the process and the tooling is there now where we can literally have someone stroll in off the street tomorrow and want to change the entire distribution and we say, yeah, go ahead, we can trust that, we can try that. If you don't have that, this idea is going to fall on its face initially. So you really need to make sure that you are open to the possibility that everything changes, but that doesn't necessarily mean you have to change everything all the time, all at once. And there are real benefits from doing it this way. When you're rolling, the closer you are to upstream, the better it is for everybody. It's easier for distribution builders because we can benefit from what everybody else is doing upstream. We have an easier time talking to those upstreams. We have an easier time contributing back to them. That also means we have an easier time for our users too. You know, our users are going to have more information that is accurate about the current version of stuff that's running. When they need help, there's more people that can help them. And for us, it reduces a whole bunch of double work, retesting, and this never-ending death spiral of backports that require backports, that require backports. And then people wonder why it takes so long sometimes to release a patch in something like Leap. But change is still scary. And not everybody wants to go at the speed of Tumbleweed, as fast as everybody else does. And not every upstream is necessarily aligned, even with the stuff they're using themselves. I really wish that was true, where every distribution took care and every upstream took care, and when they're dependent on something, they talk to each other and make sure they all release things reasonably aligned. But it doesn't happen. And we don't live in a perfect world. And that gives us therefore a little bit of work to worry about or to take care of. And yeah, I think I've already mentioned this, but speed is an element of rolling releases, but it's not necessarily the defining part of it. Full speed is not the only speed. Right now, with Tumbleweed, we've proven we can go as fast as upstreams. We've proven we can go as fast as contributions. I think rolling releases are the answer at any speed where our users want to be. I don't think regular releases are the right way of doing software in 2020. Full stop, the end. And so if Tumbleweed is too fast for you, fine. Then let's look at answers where we find a better balance that takes everything we know from the process and everything we know from the ability to move quickly, and slows it down at a pace which doesn't scare too many users away, doesn't let us drift too far from upstreams. Maybe there is this lovely Goldilocks point that no one's found yet of a rolling release that's just fast enough. I'm really keen on exploring that idea. In some ways, I already am kind of exploring that idea. With MicroOS, which I was talking about in my earlier session, we already have a distribution where the amount of change that happens to MicroOS is less than the rest of Tumbleweed. So even though it's built on Tumbleweed, MicroOS is a smaller distribution. It's just there to run one thing.
In this case, I'm going to say it's just there to run containers. And if it's just running containers, there isn't that much to change. A kernel, podman — that's kind of it. That's all it does. So you don't get quite so many updates. You don't have quite so much risk. The risk gets mitigated by the fact that it's immutable anyway. So while it's running, it's not going to change. When it reboots, well, when it reboots, you know exactly what services are running on there, just podman. So it's trivial for MicroOS itself to figure out, are my podman containers still running? And if they're not, then automatically fix itself and roll itself back. So you can keep the code base running at full speed, but actually ship something that's so much smaller that the fact that the code base is going really, really quickly doesn't really matter. Because the only parts that the user is exposed to are this relatively small couple of hundred packages. And, you know, well, if they're tested well — and, touch wood, I do a pretty good job of testing them — it always works. It's just as stable as Leap or something even more conservative. And in my case, that's why I actually wanted to do this presentation. You know, I don't use any regular releases in my setup now. My Nextcloud, my EmulationStation, my Minecraft server — everything is now running on MicroOS. Someone just asked a question: automated updates can be dangerous on changes of major versions of a package; any option to pin a package to a major version would be really helpful — any ideas, plans about that? Well, you can theoretically pin something, but I would argue that it's the wrong way to think about it. You know, in the case of MicroOS, have the update happen and have your health checker run — a rough sketch of that kind of check follows below — you know, if the health checker says it's running fine, then it's running fine. If the health checker says it's not running fine, it's going to roll itself back and pin itself. So manually interacting with the package manager to figure out what version is running where — that's not something you should be worrying about on the base system. Now, that might be something you want to worry about in the service you're running, like for my Nextcloud. Yeah, sure, I pin my Nextcloud to the stable stream because the beta stream is horrifically dangerous. But that's something that you do in containers. That's just deciding which container you pull — that's not anything to do with the operating system. That's, you know, now user space, and with this concept, not something I have to worry about as a distribution engineer; you can run whichever version of Nextcloud you want. And that kind of actually leads me nicely to this epiphany that I had when I gave a version of this talk last week. So — unfortunately, I know we're at half past; I'm going to go on for about five, ten minutes, I'm afraid, sorry — that is: is everybody doing everything wrong? You know, we know that RPMs are great for building, you know, we've been doing it for years, OBS is great, all is good. But it can be painful for users. Why do we still make users deal with packages? You know, why? You know, containers are a real thing. More people know how to use containers, you know, back to that ridiculous graph I showed earlier, you know, there's plenty of projects out there that only do containers.
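As a rough idea of what that health check could look like in practice, here is a hypothetical standalone sketch that only asks "are my podman containers still running?" and exits non-zero otherwise, so that the caller could trigger a rollback. It is not the actual health-checker shipped with MicroOS; the container names are made up and the JSON field names are assumptions about podman's JSON listing.

#!/usr/bin/env python3
# Hypothetical post-update health check: fail if an expected container is not running.
import json
import subprocess
import sys

EXPECTED = {"nextcloud", "mariadb"}  # assumption: names of containers that must be up

def running_containers():
    # Assumes "podman ps --format json" returns a list of objects carrying
    # the container names and the current state.
    out = subprocess.run(
        ["podman", "ps", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    running = set()
    for entry in json.loads(out or "[]"):
        if str(entry.get("State", "")).lower() == "running":
            running.update(entry.get("Names", []))
    return running

missing = EXPECTED - running_containers()
if missing:
    print("health check failed, not running: " + ", ".join(sorted(missing)))
    sys.exit(1)  # non-zero exit means the update should be rolled back
print("health check passed")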
When you look at that ridiculous graph from earlier, you realize there's all these projects that are container first. But there isn't actually a container-first operating system out there, like, not one that's really thought: what should we look like if we still wanted to be interactive? We still wanted to have users work with it like they work with, you know, openSUSE or other server distributions, but, you know, didn't have RPMs, didn't have packages, you know, didn't have a package manager that you interacted with in the same way. You just use containers and only containers. Because, yeah, again, contentious point maybe, but I don't think developers or society really want to care about packages. You know, they just want either the service that they want to run, or if they're a developer or DevOps engineer, you know, maybe the languages or the libraries they care about for the thing they want to build against. Well, containers are a really easy way to actually sort of translate all of the stuff we've been doing for the last 20 or so years, and offer users our software in a new format, which is actually better aligned with that way of thinking — you know, just give people the services they want, just give people the languages and libraries they want, and don't have them faffing about with packages and, yeah, worrying about all that stuff. So this is a crazy concept. A crazy concept from me normally ends up with a bad name attached to it, and then the bad name has a habit of sticking. Maybe this one hopefully will not. I'm calling this idea Cool. You know, what about if we had an operating layer, you know, a container-only system that just did nothing but containers. Basically, something like MicroOS, but actually having, like, a whole ecosystem of containers ready to use that were built together, that were tested together, because, you know, I do accept the fact the outside wide world of containers isn't necessarily the best place you want to be encouraging people to consume all of that stuff from. Some of those containers out there have been broken. Yeah, and, like, just like Brett is saying in the chat right now, you know, containers are currently problematic — they're not curated like system packages are. He's right. This is where this idea comes in. You know, what if we had a curated collection of containers really handled for what people want to use these days? Like, I've called this concept runtimes because I like Flatpaks — I've copied the idea from them, basically. You know, why not have a bunch of runtime containers that contain the language libraries, the toolchain, all bundled together. So, you know, you'd have a Python runtime, you'd have a Golang runtime. And then after that, you know, you'd have apps, you know, basic proper service containers that are ready to go with the actual thing that, you know, sysadmins want — Apache, MariaDB, whatever, etc. You know, running the apps would be a simple, you know, podman pull, you know, some nice simple short name, running from registry.opensuse.org. And, you know, building your own service based on an existing one would be a simple case of, like, using buildah or, you know, Docker with a Dockerfile, and just, you know, pulling that from that same service.
The same would go for the runtimes, you know, it would be an incredibly easy, simple platform for anybody doing anything with containers to just, you know, pull their runtime, and they'll get Python 3 and they won't need to care about exactly which version — unless they want to, in which case they can change the version number at the end. But they shouldn't need to care about exactly which version of Python they're working on. You know, they should just be able to pull it down, have everything they need in there, have the Python command line, have all the libraries ready to go, and then just base their container alongside it — so they can, you know, just build FROM, say, the Golang runtime and build their container alongside it, chop chop. So this is how users would see it, you know, nice, simple, very container, cloud-native friendly, but actually behind the scenes, I think I've got a good idea how we could build this whole thing in OBS relatively quickly, using OBS for what it's really good for and actually having a whole bunch of subprojects. So, you know, you'd have a master project — I'm calling it Cool in this example here — have these runtime subprojects, have subprojects for specific versions of things like Python, and then have the packages dependent on those also built separately there. So you'd be building RPMs very similar to how we do current RPM builds. The difference is you'd have an interesting nesting of subprojects, but everything would still be built together. We're not talking about, like, freezing these subprojects artificially like we do with regular releases, you know — so, you know, every new thing gets into the base system, everything can rebuild, you get a whole bunch of new containers, you know, so you're still honoring the kind of built-together part; the testing together, again, same kind of thing; the shipping together, again, same kind of thing we're doing with Tumbleweed. But there will be times when things need to diverge, there'll be times when we do want to pin a Python runtime to something, or there'll be a time when something might break a little bit. And this way we'd actually be in a position to allow that to happen, you know, we could actually release the other runtimes that are fine, we could release the base system that is fine, and we can leave those containers there until we get around to fixing them, or until the support lifecycle is done — so, you know, we don't always necessarily need to move everything at warp speed, we could speed things up or slow things down as we need to. And, you know, this might be built in a really complicated way in OBS, but users would just see this, like in the examples I gave earlier, as a nice simple layer in registry.opensuse.org — we flatten it all down and keep it simple, which is kind of what we're doing already when you look at the MicroOS and other containers that we have in the openSUSE namespace, you know, they're built in a multitude of different ways, and yet they just appear as, like, opensuse/tumbleweed or opensuse/microos or opensuse/busybox. So, yeah, it keeps it simple. We don't want things to be complicated for people to use. Somebody asked what about the desktop, which is really cool, because I already put a slide in for that. I already did a talk about the MicroOS desktop. The Cool idea is a very server-oriented idea.
I think for the desktop side of things, the MicroOS desktop is kind of already on track for that. And that's what that wants to be: the rolling release that I use. So, please — my video from my last session is already on YouTube, you can watch that talk already. Or you can go to the talk tomorrow, where Dario is actually talking about how he is using the MicroOS desktop as his daily driver. So you could already argue the desktop side of this equation is already well on the way to being fixed, and this Cool idea is, yeah, figuring out how to give a similar kind of curated solution for the server container side of things. Now, I've run eight minutes over. There's been a whole bunch of questions in the chat. I will try my best to snipe a few out before I stop. I addressed Brett's thing — I agree with that. Answered Petro's thing. Sorry, sorry, sorry. Axel — yeah, you have to worry about the software that runs on the system, not in the container. That's the point of this. Let's use that for all it's worth, so we can worry about less stuff. That's part of my goal here as well: not only moving stuff quicker and being more aligned with upstream, but also cutting down on the amount of stuff we have to look after. If upstreams are taking care of stuff well, then there might be a case of no need to curate them and put them into Cool. But if they're not doing it well, if the curation is needed, then let's do it properly rather than just putting things all over the place. "Upstream can never be trusted" — yeah, but at times upstream can do a better job than we can. We can't always be trusted either. Yeah, that's it. Any other questions from the chat or voice before we call it a day? Because I don't want to take any more of your break. No? Cool. If anybody really likes this idea, please ping me on, well, IRC or chat or email or the factory list or whatever. You're likely to see this as my Hack Week project next time SUSE has a Hack Week, because I think I could even start bootstrapping this stuff alongside Tumbleweed. I want to see how far this idea goes. But yeah, if other people like it too, let's go. Cool. Yeah, pun intended. Okay. Thank you, everybody. Bye-bye.
Linux distribution projects have for decades worked days, nights, weekends to carefully download, compile, and maintain thousands of software packages. And they often do this in carefully curated distributions which release once every few years, and then gather endless amounts of happy users while that version is supported for half a decade or more. This talk will cover precisely why this model we've been following for so long is fundamentally flawed, puts dangerous strain on the communities and the companies doing the work, and fails to deliver what users actually want, often misleading those users into a false sense of security. Richard will then discuss how Rolling Releases are a naturally healthier, self-sustaining model for distributing complex software stacks like Linux, and how the approach better delivers the promises and benefits expected by users from open source software. Finally the session will give examples of how, with Tumbleweed and MicroOS, openSUSE already provides everything anyone needs to leverage the benefits of a rolling life and escape the false comfort provided by traditional regular release software.
10.5446/54651 (DOI)
Hello everyone, and welcome to the talk about release-monitoring.org. My name is Michal Konečný and I'm a maintainer of release-monitoring.org for Fedora. This is a recording because I'm on the road right now; I'm not sure if I will be available in the chat, but I'm not able to present live. Okay, so let's start. Here is the agenda. This is an illustration of release-monitoring.org and how I imagine it when I'm writing my blog posts. You can see it's a very nice world, and the-new-hotness is actually something floating above Anitya. Okay, so let's start with some basics. What is release-monitoring.org? It consists of two applications. The first one is Anitya. It provides the web interface for the users to actually do the things they need. Users can add new projects and watch for releases. It automatically checks for new releases of the projects and it sends a Fedora messaging message when a new version is retrieved. The-new-hotness is a Fedora messaging consumer. For those of you not aware, Fedora messaging is the messaging bus we are using in Fedora, basically for every message that is sent in the infrastructure. The-new-hotness listens to messages emitted by Anitya. It is only interested in the ones that are about a new release being found; there are others. It creates or updates Bugzilla — which is our issue tracker system — when a new release is found, so the packager who maintains the project in Fedora is notified about the new version. It can also start scratch builds in our build system if this is configured for the project. Let's start with Anitya. I will talk about it a bit more because it's a really interesting project. Let's start with some magic numbers. These numbers are actually kind of impressive. The first commit for Anitya was on 28 November 2013. The first release was 0.1.0, on 29 September 2014. So as you can see, the first release was almost a year after the project was created. I have been the maintainer of Anitya for two years, so I don't have too much knowledge about the history of Anitya, but I at least know what it is. Let's look at some contributions, so you can see that the project is still alive. There are almost 2,000 commits. We have slightly above 50 contributors. There were almost half a thousand issues created and we closed plenty of them. As you can see, we still have some open and we are trying to close them as soon as possible. The current version of Anitya is 0.18.0. And just to make it more interesting: the number of projects that are currently watched by release-monitoring.org is 126,435. This number will keep getting higher, because new projects are added each day. Just for this talk I have here the number of packages that are mapped to the openSUSE distribution, and we have 331 packages mapped to openSUSE. It is primarily used by Fedora, so there are plenty more for Fedora. Here you can see the diagram which represents how Anitya is working. As you can see, there are two kinds of users that can interact with it. One is the regular user, who can add a new project, add a new distribution to Anitya, edit a project, or flag a project — a flag is a mechanism used to mark a project when there is some issue; an admin usually looks at it, investigates whether this is really an issue, solves it and closes it. The regular user can also add new mappings — mappings are the links between a project, a distribution, and the name of the package in that distribution. The admin can do some more advanced operations.
He can delete projects, delete mappings, delete distributions, edit distributions, and close flags that are solved. He can also ban regular users and promote regular users to admins if needed. So everybody who works with Anitya can become an admin — just say if you want to and I can make it happen. Anitya has two other parts; they belong to it but run as separate applications. The version checker is run regularly and checks every project that is there for a new version. There are some rules for the checking — it does not check every project on every run, there are exceptions. The main exception is GitHub, because GitHub limits the number of requests you can do in one hour. So in this case the new-version check is sometimes rate limited, but for most of the projects, if there is a new version, we will know about it in a matter of hours. There is also the libraries.io consumer. The libraries.io consumer is just a listener for the libraries.io messages. When it finds a new project that is interesting — that is actually hosted on a platform we are interested in — it reports it to Anitya and creates a new project, if this is configured, or reports a new version. If a new version is reported, Anitya sends a message to the Fedora messaging broker. Anitya sends a message on almost every change that happens in the application. So if you want to see the history, you can use our datagrepper application, which is actually only a wrapper for the messages in the message broker, and you can see what exactly was sent and what changed in Anitya. There should probably be a full history of Anitya there. So, the current situation. The current situation in Anitya is that we are working on the 1.0 milestone, because Anitya is mature enough; until now I just stayed with the previous versioning scheme. Here is a link to the actual milestone. You can see what needs to be done and what is done. Most of the things for 1.0 are done; I just want to do a few more features and bring the documentation to a better state, because there are plenty of things that are outdated. Because we have Hacktoberfest this month, Anitya is part of it. So if you want to help on Anitya, there are issues that are labeled Hacktoberfest you could work on. It's a nice way to start contributing to it. The next one is the-new-hotness. The-new-hotness is a floating island in the realm of magic; it's actually the bridge between Anitya and Fedora, bringing the monitoring to the Fedora side. Here are some statistics. The first commit was on 13 March 2014. The first release was 0.1.2, on 17 November 2014. I'm not sure why the first release was this minor version; I didn't find any tag older than this, so it's possible that there were some releases that just weren't tagged before. There are 140 commits from 22 different contributors. We have almost 150 issues created and we closed most of them. The current version is 0.13.1. Same as with Anitya, I'm currently working on the 1.0 milestone, but there is plenty of work before that will be released; maybe in the meantime I will do some bug-fix releases. The-new-hotness is actually much more complex when you take into consideration the number of external applications it needs to communicate with. Codewise it is simpler — it's a much smaller code base than Anitya — but it talks to plenty of other services. The journey starts with the Fedora messaging broker.
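As a rough sketch of what such a consumer looks like in code — this is not the actual the-new-hotness source, just a minimal example against the fedora-messaging Python API, and the exact topic string and message body layout are assumptions:

# Minimal sketch of a Fedora messaging consumer in the spirit of the-new-hotness.
# Requires the fedora-messaging package and a configured broker
# (normally via /etc/fedora-messaging/config.toml).
from fedora_messaging import api, message

def on_message(msg: message.Message):
    """Called for every message delivered to our queue."""
    # Assumed topic suffix for Anitya's "new version found" notifications.
    if not msg.topic.endswith("anitya.project.version.update"):
        return  # not a new-version notification, ignore it
    # Body layout assumed; check datagrepper for real examples.
    project = msg.body.get("project", {})
    new_version = msg.body.get("upstream_version") or project.get("version")
    print("new release of {}: {}".format(project.get("name"), new_version))
    # the-new-hotness would now check MDAPI/PDC/Pagure and file a Bugzilla ticket.

if __name__ == "__main__":
    # Blocks forever, dispatching incoming messages to the callback.
    api.consume(on_message)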
Anitya sends a message that the-new-hotness wants to consume. In the case that this is a new version, it contacts MDAPI and it contacts PDC. MDAPI is the metadata API, which lets us check if the project is still active and whether we should report this; PDC just tells us if it is retired or what version is actually there. Then we contact Pagure — we are hosting our package sources there — and we look at the notification settings, which can be set on the Pagure page. If all of this is correct — we have a new version and the project is still alive, so we should report it — we contact Bugzilla and create a ticket, or update the ticket that was created before. In case the user wants a scratch build as well, we prepare it and send Koji the information that we want to build it, and then wait for the actual message from Koji that everything went well. If this is a new mapping for a project in Anitya, we contact Anitya and just send an API request to do a check for the project — we need to check if there is a new version because the mapping was changed. The current situation is that the 1.0 milestone is being worked on, same as for Anitya, although there is much more to be done in the case of the-new-hotness, because for most of the things I need a staging environment for testing, since I need to communicate with plenty of external services. We just finished the data center move of Fedora, which was a really big project, and we are setting up the staging environment to have something to work with. So right now I'm blocked on the new-hotness work. Anitya, because it doesn't rely on anything external, can be worked on without issue. Same as Anitya, the-new-hotness is part of the Hacktoberfest event. You can do some small changes — you can still work on it; it's just that you can't test it against the actual systems it connects to, but you can work on it and write tests. The-new-hotness and Anitya both have almost 100% test coverage, so it's nice to work with them. Okay, so how is this integrated into the packager workflow? This is the packager workflow for Fedora; it is not integrated into the openSUSE packaging workflow. This is where the magic actually happens and this is how it looks. We have a packager. The packager requests a repository for a new package; this is created in Pagure. He creates a project in Anitya and adds a new mapping. I would be glad to have this automated in the future, but right now it needs to be done manually by the packager. There is a script between Pagure and Bugzilla that creates the component — it synchronizes Pagure with Bugzilla. It creates the component in Bugzilla, so you actually have something to report against. If Anitya finds a new version, it reports it to the-new-hotness, and the-new-hotness creates a ticket in Bugzilla. In the ideal case, the packager always has up-to-date information about new versions. There are exceptions, like projects that changed their versioning scheme, where it is hard to guess which version is actually the latest. There is a change I am working on that will allow Anitya to report not only the latest version but everything it finds, and the-new-hotness could then notify the users that there is a new version which is not the latest — but we found a new version, so if you are interested you could work with it. Let's go to the demo. Here is Anitya; this is how it looks. I am actually logged in — you can log in using the Fedora account, or there is an option to use OpenID, or... I am not sure what the third option is.
I am using the Fedora account, as you can see. As you can see, there are more projects than I announced earlier, because the number keeps growing. You can look at the docs if you want — this is the documentation for the actual version that is deployed. You can look at the projects here. The projects page is just a list of the monitored projects. I can look at one of them. Here you can see the latest version that was reported, and you can see the version list on the right side. You can see the status of the project: there was no new version found, but this is okay, we just didn't find anything new. You can see the homepage of the project. You can see the backend — I will talk about backends more later. The ecosystem is only visible for the admins; this is not something the user should be aware of, it's just for our own info. There is a default version scheme for every project — it's RPM, but you are also allowed to use semantic or calendar versioning. The date of the last version check is shown. The delete-project button is only for the admins. Below you can see the mappings. As a simple user you can edit or add new mappings. You can see that this project, 0 A.D., has mappings for Fedora — actually three mappings, I see — two for Ubuntu, two for Debian, two for Mageia, two for Arch Linux and two for OpenMandriva. Here at the top you can see the flag button, which is used to report anything: if you flag it, you just write the reason and submit it. As an admin you can look at all the flags that were reported on the flags page. Currently we have only a few of them, so it's nice — it looks like most of them are solved. Here you can see the list of distributions. Distributions can be added by every user. We just changed the input method, because before you could add a new distribution through a text field and we got plenty of typos, so we changed it to an actual form. If the distribution is already there, you can just choose it from the drop-down. The last thing you can do is add a project. Here you can see what can be added. There is the project name, homepage, backend, and version scheme, which can be chosen from three options. We have plenty of backends supported — the backend is actually the hosting service. You can look at the list and see that plenty of them are supported. If yours isn't supported you can always use the custom backend, but the custom one is somewhat harder to set up. Then there is the version pattern, which is only for calendar versioning; the semantic scheme just uses its own rules. This field changes based on the backend. The version prefix is used just to remove anything before the version: if the project is using, for example, a 'v' before the version, then you can add the 'v' to this field and it will be stripped before sorting. You can add some mappings from the start. Here you can see the drop-down that you can choose the distribution from, and there you write the name of the package. The "check latest release on submit" option means that when you submit, it checks for the latest release. This is not really needed, because most of the projects are checked within one hour anyway. This will be changed in the future, because I want to allow the user to actually test their settings — whether the project versions can be found and everything — before submitting and adding the new project. Did I miss something? I don't think so. The last thing you can do is search for any project using the search field; it will just show you the projects that are found.
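Everything shown in the demo can also be reached over HTTP. Here is a small sketch of asking release-monitoring.org for the latest version it knows of a project, using Python and requests; the exact v2 endpoint path and the response field names are assumptions, so check the API documentation linked from the site for the authoritative reference.

# Minimal sketch: query release-monitoring.org for a project's latest known version.
# Endpoint path and response fields are assumptions about the v2 API.
from typing import Optional

import requests

API = "https://release-monitoring.org/api/v2/projects/"

def latest_version(name: str, ecosystem: Optional[str] = None) -> Optional[str]:
    params = {"name": name}
    if ecosystem:
        params["ecosystem"] = ecosystem
    resp = requests.get(API, params=params, timeout=30)
    resp.raise_for_status()
    items = resp.json().get("items", [])
    if not items:
        return None  # Anitya does not know this project
    return items[0].get("version")  # latest version Anitya has seen

if __name__ == "__main__":
    print(latest_version("0ad"))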
At the bottom of the page you can see when the last check was done and how many projects were checked: how many were okay, how many had some error, and how many hit the rate limit. As you can see, in the last check there was no rate limit hit, which means we didn't have any issue with the GitHub rate limiting. Okay, let's go back to the slides. Here we have the references. You can look at the release-monitoring blog posts; I try to post regular updates there. There is the URL for release-monitoring.org and the repository URLs, so you can look at Anitya and the-new-hotness. If you are interested, you can help; we are always happy to see any new contributor. So thank you for attending this, and hopefully I'm available in chat to answer any questions.
Release-monitoring.org is an open source application that lets you add any project and checks for new versions of it. It is used as part of the packager workflow in Fedora. This talk will show you the basic features of this app and how it is integrated into the Fedora packager workflow. There will also be a short demo of the application itself.
10.5446/54653 (DOI)
Okay, so now it's time. Hello everyone. Today is the second day, so let's start. My name is Shinji Enoki, and this talk is about the challenge of growing the LibreOffice Japanese community through events under COVID-19. The talk has two topics: first, event activity in Japan over the past 10 years, and second, the current status of event activity in Japan. About me: I am from Japan, active in the LibreOffice Japanese community and other communities, and I live in Nishinomiya, which is near Osaka. First, event activity in Japan over the last 10 years. In 2010 the LibreOffice Japanese community started. At that time I was not a member of the Japanese committee and not one of the founding members, but soon many people joined; some of them are contributors to LibreOffice Japanese, and others are contributors to other open source communities. That year we had four events: three Open Source Conference events and a LibreOffice study meetup. The meetup had started as an OpenOffice.org meetup and moved over to being a LibreOffice meetup. The Japanese Team was also founded. The Japanese Team is part of the Japanese community and is responsible for tasks such as support and relationships, for example the relationship with the global community, marketing, and press contacts. In 2011 there were more events, and the focus was on events and on relationships with other open source communities; we also joined other open source events. There were other activities too, including meetups for LibreOffice users and knowledge sharing, such as study meetups and LibreOffice Kaigi. Sorry, this table is a bit old, but please look at the trend: at the start we had study parties and offline meetings, including hackfests, but after 2017 and 2018 the hackfests slowed down; the trend for the OSS events stayed almost the same, and the trend for LibreOffice Day is also about the same. The mini conference is an annual conference. LibreOffice Kaigi and the mini conference are almost the same, but the mini conference includes international participation, while LibreOffice Kaigi is a Japanese-only event. For example, the Kansai LibreOffice study party is a half-day event, usually in a seminar style. LibreOffice Day is a very small meetup where we work together in a co-working space; it is held every month, and we have organized maybe 70 or 80 of them so far, and it is still going every month. We also sometimes organize hackfests; this hackfest was an after-event of the mini conference. LibreOffice Kaigi is our annual conference; this picture is from the 2016 event. Another strength is our relationship with East Asia: with Taiwan, where Franklin, Mark and many other Taiwanese people are active; with South Korea, for example DaeHyun; and with Indonesia, where there are many Indonesian contributors. In 2016 we invited Franklin to give the keynote at LibreOffice Kaigi, and in 2017 the LibreOffice mini conference was organized together with the openSUSE.Asia Summit. Since that summit, we have joined it every year. COSCUP is the biggest Taiwanese open source event, and in 2019 Japanese community members joined it, as well as some other events from time to time. Last year we organized the first LibreOffice Asia Conference: maybe 80 people came, 20 from other countries and 60 from Japan, over a two-day conference. Please check my earlier slides for more information.
The second topic is the current status of event activity in Japan. Right now we focus on online events, mostly hackfests: we organize them every week, plus some other events. Of course, this year the situation is very difficult; in Japan, as in other countries, offline activity is very hard. Most contribution can be done online, but it is also important to meet face to face: it helps build trust, makes communication easier, and the interaction is fun. As for the Japanese situation, hundreds of new COVID-19 cases are found every day. That is maybe fewer than in many countries; however, offline activity is still not easy. That is the trend. On the positive side, the tools are very nice: open source online tool development has accelerated, for example Jitsi Meet, and our skills with online tools are improving. Online communication has become widespread in society, and many people now accept online meetings. So what kind of online events do we hold? Mostly hackfests, every week, and also the Open Source Conference online, a 10th anniversary event, a preparation party for the openSUSE + LibreOffice Conference, and others. Why hackfests? Because they are easy to hold, and they have the effect of accelerating the community, although the outreach effect is rather limited. Why did it start? The online Document Freedom Day in March was successful, with about 15 people, maybe 50 participants; it was our first online event, and we decided to start hackfests from the next month with that momentum. How do we run an online hackfest? It is different from an offline hackfest, because it depends on the online conference system. We discuss one topic and work together on one task. For example, we write a bug report while sharing a screen and discussing; in another case we discuss the user interface and help with translations; in another, everyone checks the reproduction of a bug in their own environment, because that makes it easy to understand how to reproduce it; and in another case we held a lecture on reading the source code. I think this style is "mob work": applying the idea of mob programming to non-programming tasks. In mob programming, team members work together on the same computer; for the definition and how it applies to other areas, please check the website. The effect of the hackfests is that they increase activity and the speed of discussion, because with screen sharing it is clear what is happening, and they also work as training, since it is easy to see how things are done. We are not focused on training, but we will explain if someone does not understand, and maybe that is an important point. We also stream on YouTube Live. It is a simple setup and a very easy way to do it, with no special preparation, and there is little trouble. Sometimes we do run into trouble, but nothing critical: for example, the live stream suddenly disconnects, or sometimes there is no sound. The psychological hurdle of a live stream is low, which is maybe a good point. Some people seem to watch it like listening to the radio, and some people watch it later. The number of views is small, but that is not important. This is the Japanese team channel: every event is live streamed and archived. Being regular is important. We make it a habit by holding it every week, which is very easy for the organizers and participants, and easy to remember. There is almost no preparation for a hackfest: only creating the event page and setting up YouTube Live, and that's all. Then there is the quality of the content.
The quality of the content depends on the participants. My lucky point is that the participants have a lot of topics. There are also other event activities: online meetups with Jitsi, Document Freedom Day, parties, the 10th anniversary event, and the Open Source Conference, where many open source communities get together. It is usually held in various cities in Japan, but this year it is being held online, with seminar talks over Zoom, and other people joining via Zoom or YouTube Live. This screenshot shows Raru-san from the art team; she is also a writer, and in this case she talked about how to write a novel in Writer. And a BoF is a meetup style for discussion. Now, the benefits and challenges of online events. The benefits: there is no restriction on location, it is easier to join and easy to hold, and it is easy to work effectively. The challenges: outreach is not very successful right now, face to face is better for dealing with communication problems, and parties are more fun offline. Also, people are stressed by COVID-19 at the moment. Community members are stressed too, and there are more communication troubles than usual, which the community has not been able to handle well. I don't know the solution; maybe the important thing is to be more considerate and more respectful. As for communication channels, our existing channels are not very active now, and we are moving to a Telegram group. It is very easy to respond there, but discussion is difficult, so we also started using the TDF Redmine to track tasks and discussions. In conclusion: over the last 10 years we have had many events, different types of events for different purposes. This year we focus on online events, especially hackfests; they help accelerate the community, but the outreach and communication issues still need to be addressed. That's the end, thank you for listening. Thank you. So, any questions? Thank you. Okay, so it's finished.
Every year, our LibreOffice community in Japan hosts many offline events. However, this became difficult this year due to the influence of COVID-19. So we moved from offline to online events, like other LibreOffice communities and other OSS communities. We are organizing a LibreOffice Hackfest every Wednesday night and have sometimes joined other online events such as open source conferences. Our previous offline events had three goals: to raise the profile of LibreOffice, to strengthen relationships with other open source communities, and to grow the LibreOffice Japanese community. Now that we are online, our activities focus mainly on the growth of the LibreOffice community. I will share the specifics of what we did in previous offline events and what we are doing in these online events, and I would like to talk about the knowledge we gained, the challenges we faced, and the challenges ahead.
10.5446/54654 (DOI)
Can you all hear me? Yes. Where's Sosiek? He's the one with the slide deck. He says he's here. All right, LCP. Hello. Yeah. All right, at least we can hear you. Yeah, that's. Have that, chetl. That's useful. Wait, I have to start this and that. All right. All right. All right. Well, all right. Here we go. We kind of did it. Yeah. So. Yeah, that's. Who are we? We are the heroes. And that means we are a structured team at open Susan, which is a rarity. I think that's the hero, the heroics in itself. Yes. Maintainers of the majority of open to the infrastructure, which means. That we may not maintain the most important parts of the infrastructure, but we will maintain the vast majority of it. And. That means services that that fall outside of the jurisdiction of Suza itself. We don't maintain. Build service because that's that's what Suza does. And we don't maintain. Something else and I'm forgetting what now that doesn't matter. We maintain. Most of the things that fall under open to the infrastructure umbrella. We don't maintain the build service and we do not maintain the Suza bugzilla. Oh yeah, that I forgot about the bugzilla. Yeah. So, this is a talk about the future of the infrastructure, but we should start probably from the beginning. So from the. What, what we did, what we did this year. And this was quite kind of an eventful year for us because a bunch of things were migrated from. From micro focus because of that whole thing where micro focus sold Suza to somebody else. Stasiak, are you moving the slides. I am now. So this year interview. We migrate news planters search to jackal, which was great thing because that allowed us to. To develop it all on GitHub, which is a great thing because that means that we can collaborate on things much easier and don't have to. And we can rely on. We moved to SP from connect, which is a great thing because connect kind of. Kind of is on its way out and for those of you who don't know connect it's the. Main part of the infrastructure we use for knowing who is who in the project outside of open service. And we use it for. For gaining. Memberships for for the contributors. And also some management related to that. Then we also had that move from MX. From the from the mail mail machines that were set up by Suza, we moved those to open Suza. That is kind of important because that allows us to have a working postmaster address, which is a great thing, I guess. And. And also forums for moves to the heroes network. And. That's cool because. We really didn't have any insight into those forums before we actually migrated. So. So, also, the thing that happened. Recently was Suza community accounts were created by Suza because of, you know. Micro Focus no longer wanted to maintain our accounts. Maybe they wanted but but it was better for us to actually move over to. Commit to Suza accounts because because that means that we can actually have something that is kind of maintained close by not necessarily by us, but by by people who we have a better contact with. Beyond that. Beyond that. We had a whole bunch of things deployed which. Well. It's, it's, it's a great thing to, to have some, some more software that we can use as the project because, you know, the heroes have to. Support the community of developers that that is. That is the open source community. And we do that by, by having that by maintaining a whole bunch of those things. Among those were just a meet which is useful for, for, you know, meetings, like, like this. 
A Moodle, which I actually don't know what we are hosting on right now, because I know there are some... I think... I believe the Moodle is brand new. Yeah, the Moodle is brand new. It has some courses about some things, I don't know exactly. The idea is that there are some members of the openSUSE community who are interested in using Moodle to help fill out a portfolio of tutorials and other kinds of things to help people onboard into the community, and to also have a place for technical learning material for openSUSE people to host and put in there, like learning how to deploy some service in openSUSE, or how to do it using Kubic, things like that. So I think that's where Moodle is going to come in. Although, again, I think we just literally got it a week or so ago, so I don't know how it's going to go. I think it was more than that, but maybe, I don't know. It was announced last week, so... Also not sure about that. I know it was mentioned in some email chain, because that's probably when I heard about it at all. So yeah, then we have LimeSurvey, which was already used for some surveys, user surveys used for knowing some things about our users we didn't know before. We have Synapse, which we use for bridging pretty much every part of our chat infrastructure outside of our own infrastructure. The Synapse node is not quite ready for usage by everybody, but it's getting there, I hope. And then there is Mailman 3, which is not yet used but is deployed, and it is being migrated as we speak. It is literally running in the background right now, eating away at the soul of Stasiek's laptop. Well, it's happening on a VM, so it's not doing much locally. But yeah, it's happening. And then there were updates to Redmine and Etherpad, which were pretty much unmaintained for a while. They kind of fell behind and had to be updated, which finally happened this year. That's great news, because it's a real shame to have any piece of infrastructure that falls behind: of course that is a security issue, and the functionality isn't there. We also started deprecating old machines that are still running on older releases like SLE 11, SLE 12, Fedora 24. That one hurts a little. That one really hurts. That's, I don't know, like five years or something. It's no less than that. It's a lot of time. It really shouldn't exist; it really should have been upgraded way before that. And that's our FreeIPA machine, so that really should be updated. We are planning on moving it to a CentOS 8 based system. It is in staging; we're just trying to do the data migration, which is complicated. Pretty much every single one of those machines has a plan for migration; it's just a matter of actually implementing it, and it takes a lot of time to actually do everything. So this was the past, and hopefully it will stay the past in the near future. But before us is the great future of actually working infrastructure, which is amazing, hopefully, at some point. So many qualifiers, Stasiek.
I feel like we're actually on a pretty good path at this point. We've got our house in order in terms of what applications and services we want to offer. We're at this point almost entirely operating off of Salt-based management, so we actually know what all of our machines are. I can say that many places cannot claim the same. So, we know everything... well, we know everything that we're supposed to know. Yes, hopefully, hopefully. You should take credit for one more thing that was not mentioned, I wrote it in the chat: for Redmine, which I use quite extensively. You know, we haven't had a test instance before, and suddenly... Oh yeah. That's true, we now have staging instances for at least half of our applications, so we can actually play with them. Yeah, this is useful. Yeah, I think the plan is that all of our applications are eventually going to have staging instances, if not by the end of this year then by mid next year. So, yeah, we don't want to test in dev and production, because that's scary and also stupid. Yeah, it kind of is. So, we had some requests, not a lot of requests, but some, to actually have a git forge, which would be useful for hosting some code. Some of it can't be hosted on GitHub; however you try, it really wouldn't work out. And for those cases we actually do need some infrastructure that could be a good enough git forge for us to use. We had some forges already, let's say, essentially just simple git listings for different projects; the kernel had one, and I think there was something else. YaST used to have one there, I think, yes, used to have one. There was one for... SVN; openQA I thought had one, and that might have also been SVN. It's a little hard to remember all the different ones that we have proceeded to kill over the last year. So, the project that we chose for this task, and it would be deployed today if I wasn't working on mailing lists, is Pagure, which I can't pronounce, apparently. You're doing fine; I mean, I think Pierre is not going to twitch when you say it. So that's a win. Yeah. It is a Python 3 based git forge, which is a great thing, because I hope everybody is able to at least somewhat patch Python 3, which is a bonus for any project. The upstream is really responsive; I know that firsthand after contributing to it a very few times, which is a shame, because I would love to contribute more. And I will have to, because there are some things I have to actually add. We have some similar use cases to what the current Pagure instances are used for, which is task management and using it as a base for dist-git and stuff like that. There are quite a few things that just work in our favor in the case of using this as the base. And the API isn't stupid, which is... I love how this is now a bullet point, "the API isn't stupid". Yeah, especially with git forges, which is kind of a shame; you would expect git forges wouldn't be this dumb. It also integrates with our existing Jenkins, so we could use that for CI, question mark. Hopefully. Right. And also, Pagure supports a message bus to emit messages on.
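As a rough illustration of what a service reacting to events on such a message bus could look like, here is a small Python consumer using pika. The exchange name, routing key and message fields below are invented for the example; the real openSUSE and Fedora buses have their own naming, credentials and tooling (fedora-messaging in Fedora's case), so this is a sketch of the pattern, not of the actual deployment.

    import json
    import pika

    # Hypothetical broker and exchange names, chosen only for illustration.
    params = pika.ConnectionParameters(host="rabbitmq.example.org")
    connection = pika.BlockingConnection(params)
    channel = connection.channel()

    channel.exchange_declare(exchange="pagure", exchange_type="topic", durable=True)
    result = channel.queue_declare(queue="", exclusive=True)
    queue_name = result.method.queue
    channel.queue_bind(exchange="pagure", queue=queue_name,
                       routing_key="pagure.git.receive.#")

    def on_message(ch, method, properties, body):
        # The payload layout is assumed here; real Pagure messages differ in detail.
        msg = json.loads(body)
        print("commit pushed to:", msg.get("repo", {}).get("name"))
        # react to the event, e.g. trigger CI or update a dashboard
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue=queue_name, on_message_callback=on_message)
    channel.start_consuming()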
When commits and stuff like that happened we can wire this into the existing rabbit MQ that exists for open SUSE infrastructure, and, and other services can react to events that happen on the Pagor instance, the idea is that, at least we're going to start for heroes, we're going to start with our internal assault repose, and we're going to pull the project management related to open SUSE heroes into into Pagor out of red mine to put it closer to the code and see and handle that and handle other projects that are in the pipeline that are going to use Pagor as our, as our forge for supporting some interesting use cases. And we'll talk later in this conference about, for example, OBS and get, for example. So, these are use cases we're thinking about for for Pagor in in open SUSE infrastructure. Yeah. So the next week is accounts management, which is kind of a topic. Since connects is kind of dead and Stasiek hold on. People other than me are saying in the chat that the slides are not moving. Apparently the future slide has been showing up for five minutes. Oh, I can see something change. Hey, now it changed. What did you do. I, I, okay, you can't you, you need to alt tab every time you change slides, because I think we missed the last couple of slides entirely. Yeah. All right. So, I will, I will add the back. You just like quickly flash through some of the earlier slides just in case you know, we might have missed. I can, I can. I'll tap back into the app. It will allow me. Nope. That doesn't work. Wow. Welcome. Welcome to our world. That's interesting. Oh, now it's whoa, whoa, what happened. Hey, All right, now we change again. Okay. And hopefully, All right, when you change the slide, does it change on screen. Nope. No. I love this. This is a plus. Oh, yeah. Here we go. All right, so you got the magic and cantations. All right, I don't know what I did. Connect is dead. That's, that's a simple thing to, to admit, we kind of have to get rid of it. It's on a SLE11, I think. Yeah, it's a very old SLE11 server that we're all kind of worried about. It also hosted some other things before, but those were kind of moved because that's problematic. And it's a replacement, of course, we need to look up what, what we do and what other address to reach out to us and how to contact everyone else. And what group each every person belongs to. Probably, we would like to also tie this into one system that does also the entire account management, because those things weren't the same thing before the account management and user profiles will were two different things and that's kind of problematic because suddenly you have to log in for at least once to, to the, to the user profiles service to actually show up on the service. And that kind of doesn't work too well with, with just reaching out to anybody you actually need to reach out to. So, we kind of found no gain, not really found because it's, it actually only started existing like years after having been looking for something. And it, it is a fairly new Python free based again, user management solution based on free IPA which is great because heroes already use free IP for other other things and together would be a great thing for maintainability. The same. It also shows you their profiles and groups it's, it's fairly, fairly good looking. It's, it's surprisingly good looking for, for a piece of software. I feel slightly, I feel like this is, this is you being having low expectations here. I have seen other systems. That's fair. 
They're not, they can't all be winners. They are mostly not winners. So, the, then the system is very extensible because of using the free IPA backend and we can very easily add some more things to it because of how free IP works. And it allows for custom fields. It's, it's pretty good at being, being an IPA. That's surprising. It's fairly active. It's hard to tell how it will look in months time or years time because currently it's actively being developed. It's, I don't think it's particularly ready yet for prime time, even though it is ready for prime time because it has to be. Yeah. I like to kind of the most important part about Noggin versus what we currently have, which is a mess. Noggin doesn't itself implement anything really. It is a front end for the existing free IPA server and provide and uses the free IPA APIs, so that there's things like self service interactions for user profiles. And then there's a lot of management things like of that nature. So, because of that the, the actual Noggin application is fairly lightweight and somewhat simple compared to the, to the actual backend which has a years of track record of being actively maintained and, and supported. It's less scary than, than our current situation. Yeah, especially since it's a fork of a fork of something that's just, just a thing. It's, it's not good. And the API is reasonable just because it uses free IPA again. So, so that's great. And also it has a fed message integration, which is also, it also could be useful. So, next we have switching slides. Hey, it worked this time. And open to the forums. And open to the forums is an interesting case because we are still using the built in for which is un-maintained and it's a nightmare. And it's a nightmare. And it kind of looks terrible. So, maybe it would be the time to switch it over to something else. And I only have one bullet point because really there isn't, isn't much more needed to explain why we would like to replace it. So, the solution to that is switching slides. It showed up. Yeah, I know. I made sure it showed up. It's discourse and discourse may be known to you because everybody is switching to discourse for some reason. I don't know why. Maybe because it's modern and easy to use. I don't know, has a lot of plugins and that means that we actually wouldn't need to implement anything else because on one hand, on the one hand, we kind of already have all the plugins we need. And on the other, I already implemented something that may be useful for us in there in the upstream. So that's very good. And the upstream is really active. It's really hard to keep up with the development of discourse because of how active the upstream is. At least the churn isn't breakneck pace insane. It's still fairly reasonable in terms of their release cadence and code churn though. Yeah. That means we may be switching to discourse in the near future, which may be a little bit hard because the emails thing where I'm not really sure how we are going to perform this migration. That's that that would be a little bit hard. Let's see how that goes. All right. So, open to the mailing list. And open to the mailing list and switching slides again. All right. Open to the mailing list are basically what you would expect from mailing list which is which is operated mostly with emails and commands in shell, which is fine, I guess. But if we could have something better, maybe we would use it. 
Also, the archives are really hard to look at sometimes, and I would really love to switch to something else. And that means we are switching to Mailman 3, and switching slides again. All right. Mailman 3 is really cool because it gives us HyperKitty, which is a mailing list archiver that doesn't suck. It actually kind of looks like forums, which is a great thing because it's a lot easier to navigate than traditional mailing list archives, and it also includes features like actually composing messages from the browser instead of just using email. Composing from the browser requires login, so of course that means you have to have some login, but it doesn't mean you have to log into your email to post from there; you just need to log into HyperKitty. And it's helpful for people that are lazy, and also for people that are not used to using mailing lists. And there is Postorius, which is also great because it allows maintenance and moderation of mailing lists from the browser and also from an API and other things. That is really useful, because it means we don't have to log into the VPN to change anything and then log into the machine and stuff like that; we just have to log in in the browser, change things, and have them be active right away. It's just a better solution than the thing we have right now. And it's migrating in the background, so I am seeing things moving. It's going to happen hopefully in a week or something; I'm really just saying that this is something that will hopefully happen in the very near future. I'm just wondering how slowly the migration of the older mailing lists will go; it may actually need to be next week, and not this week as I hoped before. It's just life; I have to deal with slow things all the time. All right, and that's basically all of it. I think you should join us, because we are cool and we do cool things. And with your help we could do more cool things, like actually deploy Pagure and Discourse, which would be great. We didn't yet because there was no time, but with more people, of course, this could go a little bit faster. We are on Freenode, on Matrix and on Discord, and that may not fully work, because Matrix is not bridged to Freenode properly right now. So that may be a little bit dicey, but there are a few people on Matrix from the openSUSE Heroes, so that probably would work to some degree. At least I'll be there. And Stasiek will be there. I know, I think there are a few more people here on Matrix too, but yeah. And that's a thank-you card, and I welcome any questions that may be coming this way. I have one. Sure. Actually, I have two. Well, ask the questions. So, with the TSP, you mentioned moving from Connect. Obviously since COVID we haven't really needed it so much, but I recently looked into it and found out that I no longer have access to grant TSP requests, and I don't think that anyone has been able to get in since the update. I emailed Ancor about it, and he said he'd take a look at it later, but I figured it was probably the migration that had something to do with it. Yeah, it might have had some. Yeah, the database was duplicated.
Yeah, so shouldn't be much of an issue but they're not getting updated anymore. It's kind of it's so at this point I think. If I remember how this thing actually works. So TSP used to just yank account data straight out of connect. And now the connect is more or less gone. It's not really in use. It's not anywhere to get new account information so at this point it's semi non functional. I guess we can count kind of stars that we didn't need it this year but probably needs to be fixed before we, you know, go back to the whole well we're traveling for conferences again. And that's a point. I think that's actually I should look at it because yeah I did I did have the whole migration so because because we kind of wanted to do this quickly. We needed to kill off connect very quickly. Well, connect season dead. I know. I know. I said we wanted to I didn't say we did. But it was back when we were more more hopeful about connect being being dead. Yeah. So, so a anchor, definitely like when you have the opportunity to take a look at anchors a good person to probably go through with you. And just as I get out of the whole thing. So, but the other thing was the other question I had was with with the open source event manager I know that Hannah I always contact Hannah when I need something updated with the system. Because it tends tends to have some issues but you know he even he says that he knows it. And I don't know how we can solve it. It's always difficult to on an upgrade, because when you upgrade introduces some bugs and one more in the middle of like a call for papers or something like that which is somewhat constant. You know, you end up with some some very difficult situations that you can't really do much with but do you guys have any oversight on that or have you looked at that a little. Fortunately, OSM is not part of the hero set of applications. I believe that it is maintained. Kind of, it isn't maintained and sold. So it's kind of harder for us to tell actually how this is done. Yeah, I was under the impression we didn't actually maintain it because it is listed on the machine. So it is. We have access to it. Yeah, it is in the network so. Yeah, well so much for saying that I know all the we know all the machines that we have because because that's clearly not true. Yeah, this is this is actually. This is this is kind of kind of a shame because there are some machines that actually should probably be maintained together with everything else, but due to some circumstances, they just aren't and I think I think in time we kind of have to take a look at it and and actually fix that. I think that'll actually improve as you know, we talked about earlier about having the Packer instance and moving the salt. Salt states to to the Packer system. I think once we do that and essentially democratize the access to contribute to the to the repo. It'll be a lot easier for people to pitch in, or at least kind of figure out how to get started and and make that and feel more a part of the of the underpinnings of open SUSE because our infrastructure is is king for us it's what makes the project works as well as it does. Okay, thank you. That was just my questions if anyone else has has anything. Yeah, anyone else has got questions. We're here to answer them. Or we can just look at LCP's list of emails that he's importing, which is probably not what he intended to do. I have a good question. Oh, Richard. Yeah, what's up. 
Why don't why are we hosting so much of our own services and not either getting other companies to host for us or sponsor us for it or, you know, why yeah, or cloud providers, all that kind of stuff. Why, why are we sort of stuck in this old fashioned thing of you know, our own little network behind our own VPN and doing things that way. Considering that our experience with outsourcing some hosting here examples like paste or I don't know what does what does it do. Well, I mean up until about, you know, three or four months ago, the forums was outsourced. Yes, as was the account system. Exactly. And that's kind of my point is, you know, three or four months ago, can't be disappeared three or four months ago, we were saying, you know, help we need more people we've got too much work. You know, the heroes are too busy. We can't have a hard we have a hard time maintaining everything we've got. And then since then, we've added more things that the heroes are maintaining. It's really cool that you guys are doing it and you're enthusiastic but it's like I'm just kind of looking at this from a sustainability point of view, you know, sure. But a bus hits the two of you and like you say the project, you know, everything is, you know, everything that makes open through the special so you know a bus hits the two of you and opens you the stops kind of scares me. So, yeah, that's that's fair. So we live very far apart to be fair. So but it's both of us in this particular time and era. I'd be a very, I'd be very impressed, but if especially if it's the same bus. But it's a fair question, Richard. So, some of the multiple services that we have that we're that we're bringing online today are actually intended to consolidate or eliminate duplicate services that we currently have in service. So for example, we have three or four instances of random code repo hosting in open Susan today that serve no valuable reason to be duplicates so consolidating that on the Packer instance, and the end state is that we're going to migrate all the data out of red mind and into Packer. So we will eliminate about four services in the context of moving to Packer. And the mailing list services considered fundamentally critical. And so this is about a upgrade of that existing infrastructure from one that is actually broken and cannot be maintained. So the current mailing list software just doesn't work. We can't really move it forward. So that's why we're making this change now. The matrix stuff is mostly because it is currently impossible to without matrix it's currently impossible to have all the project members everywhere being able to communicate with each other. So the idea is to break down silos. So that added services. We feel is worth it, given that it helps, you know, improve the communication across across the project, but in general, our plan with these new services so like noggin is actually going to be a good way to get rid of the other services when we when we switch over to it. We are getting rid of connect. We are going to get rid of some of the other weird stuff that we have right now for for account identity infrastructure, we're going to and consolidate all of that. The goal is, we want to have a coherent set of services that support the entire project. And where it makes sense. We continue to maintain and where it doesn't. We can we can look at other options like so the open SUSE forums. I think the only reason we're currently planning on continuing to host it is because nobody wants to do the work. 
Nobody externally like I've informally done a couple of asks about it. Nobody wants to do the data migration because apparently it's too weird and too hard. So, and we don't really want to lose, I think what's almost 20 years worth of data. So we're doing them, we're going to be doing the data migration ourselves and we'll be at least for the short to medium term hosting discourse ourselves in the future, it may be we decide to you know, we set up an arrangement with discourse the business and and we migrate the data down to that path. We will still have the federated identity service and stuff like that that'd be a similar arrangement to what, for example, the fedora project does for their discourse instances. But since we already have to maintain a forum. We might as well maintain put something that we can actually maintain rather than something that's basically broken. So it's a fair question. Hopefully that answers it. You know, it does. Thanks. But yeah, like fundamentally we're not. The goal is to not add a ridiculous number of extra services. And in by and large we're actually I think on a net basis going to reduce the number of services we are maintaining as part of heroes, but we are also going to reduce the number of services that are managed through through something that basically anyone can participate in so the salt repo will actually know how to set up all of our services, or we're going to have like some kind of if we ever have a Kubernetes or something like that, then we will do it that way. Right now we're working with what we have with, which is right now, a bunch of virtual machines and assault and assault repo. And so we're configuring with salt and provisioning it that way. But because everything is managed in salt, and we have all these descriptions set up correctly. Hopefully, it means that if Stasiak and I were managed to be hit by a magic bus that could cross continents in the same day. Then the project will survive because there will be knowledge, there'll be maintained services, there will be configuration descriptions, and all of the projects we're using have have reasonably strong communities. So, there will be people who can help. So, that's that's sort of the goal here. Cool, glad to hear it. Thank you for the question, Richard. Anyone else. Well, actually, I have bonus answer for Richard. So, experience with MF it shows that it's easier and faster to do things yourself. My third study, but not to be blunt, but that is totally and utterly pointlessly irrelevant because MF it have no relationship with Susan in any way manner or form now. So thanks for the data point, but it's a different world now. And while it is true our previous experiences have to put it nicely sucked. That doesn't change the fact that in the future, we could find have positive relationships with opportunities to work with, you know, companies that do open source project services. I am not completely opposed to the idea of giving discourse money to run discourse. If it makes sense. And if we can have a reasonable SLA around it. That's fine. I am not sure they could provide said reasonable SLA but my experience is so far seeing it in fedora have been okay. But again, it's something worth evaluating the future I want us to be. And I believe I think Stasiak would agree with me here I think we want to be in a good place to make those to put those decisions before we start, you know, making those decisions with without the full ability to implement them. Yeah, I would agree. 
Someone is asking, sir, in the, how can I help. So, Stasiak, do you want to answer that question. Well, there are a lot of ways to help and that, of course, includes the basic things that that's related to all of our infrastructure so translations and and development of different websites, which is done mostly on GitHub. There are quite quite a few of repositories. I don't know, depending on what what your interest is, I could probably link you something, something more specific, but the bulk of the work that is currently done is in the migration, which is really hard to actually on off. Yeah, we, it's actually quite hard to divvy up work for for migrations because some of it also winds up being a little complicated and and granting the most challenging part of this is the part I want to fix once we have our pager instance online. The most challenging part is the catch 22 of you need to be trusted to have internal infrastructure access before you can start contributing. There is actually no reason for that to be a requirement in fedora infrastructure, for example, that is not a requirement. I have contributed and help support in fedora infrastructure projects, without ever having root access to any infrastructure. And that's some, and that that's something well, and that's something. No, they trust me fine it's just, I haven't actually asked for it, I haven't needed it. But I want us to be in a position where we can have the same flexibility. And we can have the same opportunity for anybody any geeko in the community to help with our infrastructure, because it is what helps make our project great. And, you know, the good the services we offer are the services that power the contribution. And so we want to make sure that that is also as accessible to contribution. That's sort of at least my personal vision. Some of those services that we mentioned also don't have any and we haven't had a set of self configurations written for them yet. So that can also be helpful. Yeah, once we have a public salt repo, we can actually add a list of tasks and say hey if you guys have salt interest. We can run. And then we can figure out how to do like since we now have staging and dev infrastructure for everything. We can run them on there and see if they break everything without actually breaking production. You've always testing in production. It's the only real way to do it. I mean, that's sort of fair but like, I'd like to at least have pull requests not run on production. Any other questions from folks. Well, I want to thank you all. I mean, I actually I didn't hear any of Richard said I just listened to the answer but I assume no one else is answering. Asked any more questions. Well, I mean you're welcome and Richard just asked why are we, why are we doing this more or less. Okay, well, I'll let you answer that. Oh, no, no, I already answered it. I said that basically we're doing this because we want to simplify our infrastructure to accelerate contributions and and you know, advance the open Suza community. I tried so hard to do the weird Suza tagline thing but I forgot the third word so I screwed up I'm sorry, but, but the whole point is that we want to we want to make it a better experience for open Suza contributors to it to work in the project and so that everyone can enjoy being part of the community. That's, that's the real goal. Cool. Well, thank you. And we appreciate we appreciate everything you guys are done. So it looks like we probably have a break for another hour, roughly. Nice. 
Yeah, at least, at least in this room. I mean that there's other rooms and there's other talks happening in room one. So, the next talk that'll take place is improving the user experience, and that'll take place in this room. And of course, Neil, you have a talk happening at same time about the OBS. Yep. And, and data. And thank you for sponsoring. Oh yeah, my pleasure. As always. So enjoy beer, hang out talking here if you want, whatnot. Sure. Thank you.
It has been a wild year in openSUSE Infrastructure, there has been a lot of new stuff replaced and updated, and with that done, we can finally start much bigger deployments. In this talk, the attendees will be briefed on the past and future plans of openSUSE Heroes, with regards to accounts system setup, mailing lists, communication platforms, forums and more.
10.5446/54655 (DOI)
And please tell me, can someone tell me if you can hear me all right and see my screen? Yes, it works properly. Okay, good. All right, let's start. So I'm going to talk about the state of open source license clarity and eventually how to help in a small way to make open source license discovery less of a problem and possibly an issue. So I'm about to me I'm software licensed nerd. I'm also code hoarder and that's probably related. I used to have 60,000 forks on GitHub. And I only have about 20 now just because it's very easy to click and fork. And you may wonder why but it's as I'm dealing a lot with the analysis of code, I like to keep the code around too. So keeping keeping a fork is good and fast way when you find a package and stumble on a project. And by the way on the license side, I'm also co-founder of a project, the Linux Foundation called SPDX. And I'm in quite a few tools in that space. So I'm a licensed nerd and I really do that day in and day out. So we're going to talk about license clarity and first a bit what's the problem. What we mean by clarity and how to create a licensed clarity matrix. And how do we deal first with license detection and then look at some clarity statistics and how we could help fix this stuff in the future. So in an ideal world, we shouldn't be having this discussion at all, right? It's a pure waste of your time and my time to discuss about licensing even though I love licenses. We shouldn't be there. The provenance licensing of all the third party software packages would be available in an easy to discover structured data format. And if some of you are maintainers of package for OpenSUSE, you know it's difficult. I'm immensely grateful of the work that package maintenance do to provide a bit of order to the mess that exists upstream. But we're not there yet there. And we are very far from knowing it all. And in fact, so some of the stats and CDID on 5,000 popular application packages. I don't have unfortunately I was hoping to have the time to compute stats on OpenSUSE and more distro packages. I don't have them. But looking at application packages, less than 5% of these 5,000 popular packages contain what I would call to be complete and unobligated. That means really clear license documentation. That's not much. And I'm sure if you're maintaining package that you understand what it is about because you have to deal downstream with that mess. So nowadays we will know, I mean we assemble complex software from eventually thousands of components. Think about node based application with NPM packages. Very quickly you have a few hundreds or a few thousand packages in a few with just a few lines of dependencies. And that means eventually you have as many copyright holders and licenses. And it's really harder and harder for the users and eventually redistributors with the OpenSUSE project or software companies to actually be able to cope with the volume and the complexity we have there. And this eventually demands automation. So being able to fix clarity is probably the only practical way to achieve any kind of compliance with free and open source software license at any scale. And if you care about these licenses and you care about having some minimal respect for them, it's important to be able to comply. And so just a few anecdotes on what the problem can be. I'm sure you've seen, a bit less lately, but you've seen some distro packages that add license that says it's very distributable. That's great, right? That means a lot. 
Whatever it is, it's not really reassuring about the exact terms and conditions of the license; it's pretty fuzzy. It's also very common to see repositories, maybe not the most popular ones, but packages upstream, that don't have any license information at all. Or we can have funny or hidden licenses. This is an example from a slightly older kernel driver, something I discovered which has been fixed since, where you have a thermal driver which was distributed under the terms of the GPL. And just as an aside, when we started doing some work to help clean up licensing in the kernel, there were about 800 different ways to say "this file is under the GPL", just in the kernel. The kernel is big, it has a long history; we found a lot of weird and dusty licenses in the corners and a lot of cobwebs that eventually were cleaned up by the maintainers one by one. But if you think about a funny license like that, that's a pain. Or there's the one about a license written by someone who was likely GPL-shy, where the GPL v2 notice is written in ASCII codes as opposed to plain text, which is really a way to hide the fact that it was under the GPL. So these are the kinds of things which are problematic. It's just a few anecdotes, but at scale, it's a total mess. Now, where do we get license and origin information from? We get it from package manifests and build scripts. By package manifest I mean a spec file for an RPM, the control and copyright files of Debian packages, and all the various package formats that exist for application packages, like Cargo for Rust, package.json, setup.py and setup.cfg for Python, RubyGem gemspec files, and so on and so on. That's one great source, because it's sometimes structured. Some projects, and SUSE in particular, are trying to use SPDX licenses now; npm is doing that too. And that helps bring a bit of order and clarity. At least it helps pinpoint the exact place even if the content is not structured: it helps to know that this field contains license information. That's already a big thing. Beyond that, there's a bunch of license notices, tags, texts and mentions that exist in any kind of way and shape you could think of. And there are a lot of indirect clues which may help pinpoint and provide insight about the actual origin beyond the explicit information. It could be emails or URLs: think about a link to a Stack Overflow question, answer or snippet, or a gist, or a pastebin, this kind of thing, which could all be clues used to infer where the code comes from. And once you know where the code comes from, you have a better chance to figure out the license when there's no license information. And so, parsing the license from a package manifest is a pretty simple technique at a high level, assuming you have a package that comes with structured provenance and license information. That's the case nowadays for many repositories. We talked about PyPI, but RubyGems, of course SUSE and Red Hat distros, and Debian and Ubuntu also provide structured package manifests. But if you look at scale, and especially on the side of application packages, only a subset of the packages may contain actual declared license and provenance data. When I say only a subset: out of the roughly 5,000 packages I've been looking at, the overall median license clarity score is about 45 out of 100.
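To illustrate the "parse the declared license out of a package manifest" idea described above, here is a minimal Python sketch using only the standard library. The field names (package.json's "license", setup.cfg's [metadata] license, an RPM spec's License: tag) are the usual ones, but real-world manifests vary wildly, which is exactly the problem being described, and this is nowhere near what a full tool like ScanCode does.

    import json
    import configparser
    from pathlib import Path

    def declared_license(package_dir):
        """Return the top-level declared license of a package checkout, or None."""
        pkg = Path(package_dir)

        npm_manifest = pkg / "package.json"
        if npm_manifest.exists():
            data = json.loads(npm_manifest.read_text(encoding="utf-8"))
            return data.get("license")  # ideally an SPDX expression

        setup_cfg = pkg / "setup.cfg"
        if setup_cfg.exists():
            cfg = configparser.ConfigParser()
            cfg.read(setup_cfg)
            return cfg.get("metadata", "license", fallback=None)

        spec_files = list(pkg.glob("*.spec"))
        if spec_files:
            text = spec_files[0].read_text(encoding="utf-8", errors="replace")
            for line in text.splitlines():
                if line.lower().startswith("license:"):
                    return line.split(":", 1)[1].strip()

        return None  # nothing declared at the top level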
And we'll see in a second what the clarity means, but there's really only a handful of this roughly 5000 package, about less than 200, that had license clarity score that you would consider as not perfect, but pretty good enough in most cases to go and run with it in terms of the availability of license documentation. At this stage, I want just to pause for a second to make sure that everybody can hear me, alright? And if somebody can confirm it with voice, I want to make sure we're not... Thanks. Good. Yesterday, I've been disconnected when I was making a presentation, and I've been talking alone for 25 minutes. I want to make sure it doesn't happen again. Okay, so the second side of the equation is looking at licenses, not in structured field of manifest, but as they may exist in the code. And there's various ways, pattern matching, probabilistic, text matching. Essentially, it's a problem of text matching, and the most comprehensive way would be to do a DIF, and using genomic inspired techniques of multiple text sequence alignments. And eventually, the tools... the tool I'm building, which is called ScanCo Toolkit, is using the third technique and a bit of the others. Most everyone else, well, that's the only tool I know that does that. It's hard to do at scale, but that's the only one that can provide really a correct approach. Pretty much everyone uses approximate techniques using probabilistic approach, and or finding small patterns, which may be a tell-tale that this is this or that license. So, what do we mean by license clarity? If you think for a second, and you're about to use a new repository or new package, say it doesn't come from a distro, and you want to make sure that there's no license issue. So, if there's a license that's present at the top level, ideally in a package manifest, so say in a package.json, or in a readme that's very clear on a file called Copying, that's what I call a declared license at the top level. Clear would mean that there's no ambiguity of what the license is. It's actually detectable by a tool. The second criteria would be if there's license information, license notice, or SPDX license identifiers present in the source code. So, there's no ambiguity, especially when you have multiple licenses that apply to a package, which files are which license? If you think about something like GCC, which has, I don't know, many files, but probably not far from the size of the kernel overall in terms of volume of code, it uses stands of different licenses and being able to know which license apply to which file or which group of file. You may have command line tools, you may have libraries that are under LGPL and so on and so on. So, it's important to know at the file level that it also helps to reuse the code for your users, file by file if they want to, because license stays with the code then. The important things here, you want to make sure that the information you have at the file source code level matches the thing you have at the top level declared level. There's a discrepancy. You want this to be consistent, otherwise that's a mess. And that's unfortunately pretty current, pretty common as the case. The other thing is you want to have well-known licenses. You don't want to have to scratch your head. One of the big benefits of what we've been able to achieve as an open source and free software community at large is to agree on a certain number of common licenses which are the GPL, LGPL, MIT, BSD and Apache of the world. 
There are a lot more, but that's the main handful. And if you look at the complexity and the variety of licenses in the commercial and proprietary world, we've done great, because frankly the proprietary world is a total mess — it's almost like every contract is a different license. So having well-known licenses, which are not head-scratchers, which don't require interpretation, which are well-known quantities, is important. The proxy we use there is to say it's a license that's been referenced and is known at the SPDX project. It's not a perfect thing — there may be some licenses which are not referenced there that should be, but eventually over time they will be. It's definitely a good neutral proxy to say: hey, if the license is known there, then it's a well enough known quantity. And the last thing is that most licenses somehow require you to reproduce the license text for redistribution, so you want to make sure the license text is present. These are our five criteria. On top of that, we build the clarity score, where each of the elements receives a weight. The presence of file-level license and copyright information is progressive, because we're looking at the ratio — how many files have a license, how many files have a copyright, out of the total of files present in the package. All the other ones are binary, in the sense that either you are consistent or inconsistent in terms of license, either you're using standard licenses or you're not, either you have the license text or you don't. So there's a bit of a ratchet effect, but in practice it proved to relate fairly well to what I would like to see in a project's license documentation. Now, if we look at the scores we have on 5,000 application packages, they're grouped by package type — we have Gem, Maven, npm, NuGet, and PyPI here, those are the ones we picked as examples — and you see the median score and the average. And you see it's pretty poor in many cases, right? Maven in many cases is extremely poor; in fact, the fact that Maven has so many binary-only packages, and little documentation comes with the binary packages, makes it really difficult. NuGet, same — so mostly the libraries for Windows — same thing, pretty poor. Gem, npm, and PyPI, a bit more mature and well-traveled, are definitely doing better. But there are a lot of differences and discrepancies when you look in the small. For instance npm, which is reasonably more recent than PyPI, has made it a standard to use SPDX license expressions to document top-level licenses. As a result, it's using many more well-known SPDX licenses than, say, PyPI. Yet you see that there's a big problem of consistency, where we may have a high level of declaration — almost 960 packages have a top-level declared license — but only about 8% of the files carry any kind of license information, and the corresponding consistency really lags quite a bit. Even worse would be Gems: for whatever reason, folks that write Ruby truly don't like to write comments at all, and even less so license-related notices and comments in the source code. Then there are a few more statistics which present the percentage for each of these scoring elements. In this case, for the top-level packages, we see more clearly here as a percentage the case of npm with SPDX, or — rarely enough, and that's a good thing — that about 26% of Python packages provide the license text of the package.
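To make the scoring idea more concrete, here is a small sketch of a weighted clarity score along the lines just described. The weights and field names are assumptions for illustration; they echo the shape of the score (one progressive element, four binary ones) rather than the exact numbers used by ClearlyDefined or scancode-toolkit.

```python
from dataclasses import dataclass

@dataclass
class ClarityFacts:
    declared: bool           # clear top-level declared license (manifest, COPYING, ...)
    file_level_ratio: float  # fraction of files carrying license/copyright info (0..1)
    consistent: bool         # file-level findings match the declared license
    spdx_standard: bool      # only well-known (SPDX-listed) licenses are used
    full_text_present: bool  # the full license text ships with the package

# Illustrative weights only -- not the exact numbers used by the real tools.
WEIGHTS = {
    "declared": 30,
    "file_level": 25,
    "consistent": 15,
    "spdx_standard": 15,
    "full_text_present": 15,
}

def clarity_score(f: ClarityFacts) -> float:
    """License clarity score out of 100."""
    score = 0.0
    score += WEIGHTS["declared"] * (1 if f.declared else 0)
    score += WEIGHTS["file_level"] * max(0.0, min(1.0, f.file_level_ratio))
    score += WEIGHTS["consistent"] * (1 if f.consistent else 0)
    score += WEIGHTS["spdx_standard"] * (1 if f.spdx_standard else 0)
    score += WEIGHTS["full_text_present"] * (1 if f.full_text_present else 0)
    return score

# Example: declared + standard license, but few file-level notices and no text:
# clarity_score(ClarityFacts(True, 0.08, False, True, False)) -> 47.0
```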
And this may be because it's actually a standard thing when you build a source distribution for PyPI with the Python tooling that it will automatically pick up a license text with a well-known name and include it in the source distribution that's built and then uploaded to the package repository. So that's it. There's much more to be said about the details; there's a full report, available on GitHub, that I will also upload there. A lot of this has been done as part of a project called ClearlyDefined, which is at the OSI and heavily sponsored by Microsoft. And on my side I'm maintaining these tools — not Microsoft — which are used there to do the scans. ScanCode provides a lot of features to scan packages, and what you see here is that GitHub raises these silly alerts because we have a bunch of test files which are node packages and they're completely useless — sorry for the segue there. So check out ScanCode Toolkit. What we're trying to do in terms of next steps is practically a couple of things. We're building a ScanCode.io service — always free code and data — to help scan and compute this license data and score and make it more easily and readily available to everyone. There was a presentation a couple of days ago by one of the maintainers of release monitoring; that could be something that could be integrated right away there: as a maintainer builds a package or a new package version, having the license information right at hand and understanding how good or bad the license documentation is would be really useful. The other thing we're doing is leverage. And leverage means that rather than trying to fix one package at a time — which I'm doing, and the folks from ClearlyDefined are doing quite a bit too, in some cases helping upstreams fix their stuff — we're trying to work with communities to fix things at large. So I've been working for a while on a PEP for Python, which is being submitted now and reviewed as a draft. The goal is to help structure the license field used in Python package metadata, so that we can use structured SPDX license expressions, which is something we should always use in openSUSE too. It can go a very long way to provide, again, clarity in the license declaration. And third is trying to do some outreach to like-minded license nerds. There's a group at the Linux Foundation around the OpenChain project and around SPDX as well, so we're trying to build a bit of a community around that to eventually help make this a non-issue. And maybe a couple of years from now I could come and make the same presentation saying it's the very last time we're talking about licenses, because it's no longer an issue. And that's it. So now I'm going to take some questions if there are questions. So Richard Braun is asking: have you looked at Cavil? I'm pretty sure I have a fork of Cavil — but then again, I'm a code hoarder and I have 20,000-plus forks. I've looked at Cavil, I know Cavil. And if we look here... oh no, I don't have a fork, I'm sorry. I should have one, but I'm pretty sure I have a clone if I don't have a fork. So Cavil — if I recall, and I looked at it even recently — it's built in Perl. I think it would be great if Cavil were to consider using, for instance, ScanCode Toolkit as an engine for the license detection; it would probably help you get a better set of license detections than what you can get there.
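As a tiny illustration of why a structured SPDX license expression in package metadata is easier to work with than free text, here is a toy checker. The allow-list is deliberately small and the parsing is far cruder than what real tools (such as the license-expression library) do, so treat it as a sketch only.

```python
import re

# A tiny allow-list for the sketch; real tools use the full SPDX license list.
KNOWN_IDS = {"MIT", "Apache-2.0", "GPL-2.0-only", "GPL-2.0-or-later",
             "LGPL-2.1-only", "BSD-2-Clause", "BSD-3-Clause", "MPL-2.0"}
OPERATORS = {"AND", "OR", "WITH"}

def check_spdx_expression(expr: str) -> list[str]:
    """Return a list of problems found in a declared license expression.

    Rough illustration: a machine can flag unknown or free-form values
    immediately, which a free-text license field cannot offer.
    """
    problems = []
    parts = re.findall(r"[A-Za-z0-9.+-]+|\(|\)", expr)
    if not parts:
        return ["empty license expression"]
    for tok in parts:
        if tok in OPERATORS or tok in {"(", ")"}:
            continue
        if tok not in KNOWN_IDS:
            problems.append(f"unknown or non-SPDX identifier: {tok!r}")
    return problems

# check_spdx_expression("MIT OR Apache-2.0")        -> []
# check_spdx_expression("GPL written in ASCII art") -> several problems
```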
I may be showing a lot of hubris in saying it's going to be better, but I think it's going to be better — and it can actually be tested. Well, I tested it a while back and Cavil outshone it, especially with those lovely complicated ones, like the examples you gave where the declared license is one thing and the actual license is like 20 different things. But, you know, they're both aiming to do the same kind of thing. Yeah, it's really worth having a look. If anything, the data you can get from ScanCode, even if you don't want to use the tool itself, could give you ideas: we have a database here of 20-plus thousand license texts and license notices, and that drives the detection and the diffing that ScanCode does. So you have pairs: a text file and a YAML file side by side. For example, a small license that you found for whatever X binary, which looks pretty much like a BSD license but happens to be its own license, has a small YAML file next to it. So you have about 20,000 of each of them, which can be useful in their own right if you do any kind of pattern matching. But the tool is not too shabby: it was selected by the Linux kernel maintainers, in particular Thomas Gleixner, after having checked everything else, because they found it was doing the best job there. And so, Cavil — I've not looked at it in detail, but definitely if you want to discuss it, Richard, I look forward to it, and if there's anywhere I can help, I'll be glad to. Any other questions? Okay, so I can take a minute to fork Cavil. There's a machine learning element to it now — there's a whole machine learning engine in there too, which does wonders now, especially for those complicated ones. It figures out what previous reviews were marked as acceptable or not acceptable, and does a whole bunch of assessment itself, which is scarily good. Good, well, that's great — I look forward to diving into it. The approach of ScanCode is pretty brutal, which is to do a pairwise diff between all the licenses and license notices we have here and all the files that you scan, and to do it multiple times, because you can have more than one license in a file and one license may show up more than once. The only trick there is: if you think about it, there are about 20,000 license files or notices; take the kernel, which has about 60,000 files — before it was converted to SPDX identifiers, you would have to do 60,000 times 20,000 diffs. But you want to do it more than once, so say 200,000 times 60,000 diffs. Even with a super fast diff that could take a couple of months, or you could have a thousand machines and do it in a couple of hours — and neither case is practical. So the whole trick there is to be able to do this kind of diff, and to execute any kind of probabilistic approach, in a reasonably efficient way for the very specialized case of license detection. Now, on top of that, if you have a way to do disambiguation with machine learning, I think that could be helpful even in this case, because you'd get more data and more labeled data as input to your models.
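To illustrate the kind of "trick" needed to avoid an all-pairs diff at this scale — this is not ScanCode's actual algorithm, just the general idea of narrowing candidates before any expensive comparison — here is a sketch of an n-gram inverted index used as a prefilter; the n-gram length and hit threshold are arbitrary assumptions.

```python
from collections import defaultdict

def build_index(references: dict[str, str], ngram: int = 4) -> dict[tuple, set]:
    """Map each word n-gram to the set of license keys containing it."""
    index = defaultdict(set)
    for key, text in references.items():
        words = text.lower().split()
        for i in range(len(words) - ngram + 1):
            index[tuple(words[i:i + ngram])].add(key)
    return index

def candidate_licenses(file_text: str, index: dict[tuple, set],
                       ngram: int = 4, min_hits: int = 3) -> set[str]:
    """Licenses sharing at least `min_hits` n-grams with the scanned file.

    Only these candidates need an expensive pairwise diff, instead of
    diffing every file against every one of ~20,000 reference texts.
    """
    hits = defaultdict(int)
    words = file_text.lower().split()
    for i in range(len(words) - ngram + 1):
        for key in index.get(tuple(words[i:i + ngram]), ()):
            hits[key] += 1
    return {key for key, count in hits.items() if count >= min_hits}
```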
Okay, so Daniel is asking: doesn't this push the onus onto the developer to reverse-parse the SPDX tags to read the license the code is under? Well, yes and no. Take the license we were just looking at for a second. Whatever is this 3Com microcode license about? It doesn't have a well-known name; if you see something which says "3Com microcode", that's not something that rings a bell. Whereas if you see something which is a well-defined BSD-3-Clause or BSD-2-Clause, there's no ambiguity about it. So being able to label a whole license, I think, makes it easier, cleaner, simpler. Again, it's not much different from using a variable in a programming language: you substitute the whole license text with a variable, and you make sure you use the variable name as opposed to the full text, or notices and variants of that. The thing is, with what you have today, if you don't have an identifier or a tool to detect it and you see this license, your first reaction at first glance is to say: oh, this is a BSD-3-Clause, looks pretty straightforward, bland, no problem. Except that then you have something here that is specific to 3Com, and there's a clause which I have no idea what it is about, where I probably need to sign over my third-born child to 3Com to meet the requirement. That's the purpose there. Using a shorter code, I think, will always be easier than using longer text, and it's a way to avoid having to re-read the license text. Does that make sense, Daniel? Okay, so if there's a question, I'm here; otherwise, I think it's late for a lot of us. So we want new code to be under a good license — yes, yes, we definitely want that. So I'm going to look at the drivers. Are you talking specifically about the kernel, Daniel? Yes. I see, so let me check. Let's see if my mic works — can you hear me? Yes. I found the unmute button, it might be quicker than typing. No, I just jumped in because, you know, /usr/src/linux is on my laptop, and just looking there and doing a grep for the tags, that file just has the SPDX tag and says GPL-2.0, which is very clear for a machine. But for someone taking a piece of code and using it as inspiration to develop something else — which means that, in this case, the viral GPL must be propagated onto the successor code — I'm slightly wary that some developers maybe will not understand the ramifications of taking freely licensed code and how they should make sure that their code is similarly licensed as a derivative. Yeah, you have a good point. I've heard a few folks raising the same concern, which is — if you think about it, well, probably not that Makefile, which probably didn't have a notice at all, but if we look at a bigger file like that, and I look at the log for it, and we look sufficiently far back, maybe we're lucky here. Sorry, my screen is not updating. Are you updating your screen? Oh, let me make sure I share — I stopped sharing, sorry, I just hit the big finish button. Thank you. Yeah, that wasn't me, that's from the previous talk. So if you look at my screen, you're probably able to see it right now. Yes, I do now. Yeah. And if we're lucky... oh, that one was already an SPDX thing. But in the past, you had a lot of longer notices.
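For what it's worth, collecting these per-file SPDX tags is easy to automate; here is a minimal sketch of such a scanner. The file suffixes and the "first five lines" heuristic are assumptions, and real tools handle many more comment styles and expression forms.

```python
import re
from pathlib import Path

SPDX_TAG = re.compile(
    r"SPDX-License-Identifier:\s*"
    r"([^\s*/]+(?:\s+(?:AND|OR|WITH)\s+[^\s*/]+)*)"
)

def spdx_tags(tree: str, suffixes=(".c", ".h", ".py", ".rs")) -> dict[str, str]:
    """Collect the SPDX tag (if any) from the first few lines of each source file."""
    found = {}
    for path in Path(tree).rglob("*"):
        if path.suffix not in suffixes or not path.is_file():
            continue
        head = "\n".join(
            path.read_text(encoding="utf-8", errors="replace").splitlines()[:5]
        )
        match = SPDX_TAG.search(head)
        if match:
            found[str(path)] = match.group(1).strip()
    return found

# Example against a kernel checkout: most files resolve to a short,
# unambiguous expression such as "GPL-2.0" or "GPL-2.0-only OR MIT".
```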
And so those longer notices had, you could say, a definite educational quality and value, which is to inform more permanently about the fact that it's a free software license. So definitely that's a bit lost. What's also lost is all the boilerplate about licensing, which I have no love for, in a sense. So here there's no comment, but frankly it's more interesting to have a bit of blurb that explains what the code is about than a long description of the license. And the other thing in the case of the kernel is that there were a truckload — and I won't use the French word — truckloads of licenses and really weird stuff. So avoiding variety, I think, makes it easier to understand. The other thing is that there's a project that has been working closely with this, a project from the Free Software Foundation Europe called REUSE (reuse.software), which provides a bit more than just the way to use these license identifiers: also how to provide documentation for the different licenses, some of which may not be known to SPDX. And so that's all documented in the kernel's process documentation. You have what are called preferred licenses, and you have other weird licenses — GPL 1.0, which is the default when there's no version specified, that's what the GPL says — and older ones like MPL 1.1, and a bunch of exceptions. There are quite a few of them, unfortunately. And these are the common licenses; there are many more than these. But in the documentation you have — where is it — under the process documentation, the license-rules document: a whole document which explains how it's all being used. And I think it's better to have it in one place than in 60,000 places, frankly. I cannot put my finger on one reason why not — well, the one you made is a good point, which is that you're losing a bit of the educational quality. But in the case of the kernel, where you have so many contributors, I think it's a good thing to have something which is more structured than free form. The ultimate kind of defense to this, or the big stick if you like, is having seen the number of our peers who have been bought over the years: when company A gets acquired by company B, company B's very expensive lawyers then make company A prove the provenance of every license and every piece of software that they've got in house, because company B is very, very afraid that they will buy a real can of worms. And I think that's why things like Black Duck, for example, in the commercial space have become a lot more widely known and widely used at that acquisition point. So yeah, if somebody has then robbed a bit of GPLv2 code, made it their own, deleted all of the headers or whatever, something like that will throw a big spanner in an acquisition. Yeah, yeah, that's a good point. In practice, you know, I've been involved quite a bit in merger and acquisition due diligence for open source. We're competing against the company you're talking about, but we're competing with free and open source tools, which is rather different, so we also have a very different approach to the way we address the problem. In practice, what you see is that people borrowing code and removing GPL headers is a very rare event. What happens more often than not is that you have developers where there's not much malice — it's either incompetence or ignorance, or a bit of both. And usually when you have things like that, it's mostly that.
Most of the time what you see is people who, when they don't know about the license, leave a telltale: they say, oh, I got this code from there, I don't know what license it is... but at least you know where it's from and you can then dig further. When you have incompetence and/or ignorance, it can be a problem, because I've seen, for instance, companies that wanted to keep the drivers they were building for their hardware proprietary — which I don't subscribe to, but that's what they wanted — and then they were building everything in kernel space, where they could have done a bit more of the code in user space and maybe not benefited from all the performance of kernel space, but been able to keep their proprietary status. So ignorance, and not being able to make the right choice, is usually the biggest problem in these cases — very rarely malice. And the thing today is not even ignorance or malice; it's frankly the speed at which everybody is releasing new software, and the volume of packages being used, which means it's a numbers game. You want to make sure that, at large, you know more or less what you have and more or less what the licenses are. If you're able to do that, that's pretty good enough in many cases — and it's quite often pretty hard to achieve. Okay. Thank you very much. Okay, and I think it's really the end of the day. Oh, Ben Cotton — he says it's weird that it's still showing my last slide. Okay. Yeah, but you can see my screen too, I hope. So I guess that's it, everyone. Thank you very much for your time, and I wish you all a great day, great evening, or good night.
In an ideal world, the provenance and open source license of third-party software would be available as easy-to-discover structured data. We are not there yet! We will review a detailed study on the clarity of license documentation practices in 5,000 popular open source software packages and infer the state of licensing clarity globally, based on the insights and statistics of the ClearlyDefined project and on data gained from massive license scans with the scancode-toolkit. And we will discuss what can be done to improve this situation. I will present the state of license documentation clarity in the open source community at large through the lens of:
- an introduction to the license clarity metrics we designed for ClearlyDefined and the scancode-toolkit
- a study of the license clarity of 5,000 popular open source projects across multiple programming languages and ecosystems
- an overview of the statistics on license clarity across 10M packages
- a specific review of the licensing practices and license clarity statistics in openSUSE packages
10.5446/54659 (DOI)
It's noon here, so it's time to get started with my talk about Uyuni. Let me share my screen. Okay, you shall now see it. Can everybody see my screen? I will assume a yes. Okay. So, welcome to the second day of the openSUSE and LibreOffice Conference, and let's get to it. My name is Pau Garcia i Quiles; I am the product owner and technical project manager of SUSE Manager, working for SUSE. I used to be a Debian developer and a KDE developer, and I have a long history in open source, probably more than 15 years. I can usually be found on Freenode or Gitter, and of course by email. I'm going to talk about Uyuni, because being the project manager of SUSE Manager makes me, in a way, the benevolent dictator of Uyuni, since Uyuni is the upstream for SUSE Manager. So, what is Uyuni? It's a systems management solution. When you have tens, hundreds or thousands of servers, you cannot just hack your own scripts or log into each of the servers; you want to have some kind of automation, and even dashboards — something that gives you proper information about the status of everything. We can manage all kinds of workloads from a single place: single systems, different Linux operating systems, even clusters — Kubernetes clusters, for example, can also be managed to a degree from Uyuni. We have reporting and auditing capabilities, both from our web UI and from command line tools, and of course there's an API. We can do software and hardware inventories, which is very useful for compliance, or even just to know what is going on in your organization. Uyuni can do configuration management — something you would do, for instance, if you use an antivirus and you want to deploy the virus signatures to all your servers; then you can use a configuration file, typically a JSON or XML file with definitions, that you can deploy to all your servers using our configuration management capabilities. Uyuni also has some KVM- and Xen-based virtualization features. It's not nearly as powerful as VMware or Nutanix, but if you only need some limited virtualization, say for tens of machines, this can be an option for you. This is the architecture of Uyuni: it's a typical client-server application, where we have introduced this element in the middle in case you need to offload your server. If you have tens or hundreds of clients, you can attach them directly to the server. If you are in the thousands or tens of thousands of clients, or if you have sites with bad connectivity from the client to the server, this proxy element can help you offload the server, because it acts as a cache. It doesn't really do a lot of management by itself — the intelligence in Uyuni lives in the server — but the proxy is very useful in many cases. The origin of Uyuni is a project by Red Hat called Spacewalk; it's what they used originally to create their Red Hat Network — their customer portal, essentially. Then they released it as open source and made it available for customers, or for the open source community, to install on premise, and we took it about 10 years ago and from there started adding features, more focused on SUSE systems, and also modernizing it. Spacewalk is a very old project: it started around 2008 — probably a bit earlier if we count the versions previous to what was released — so let's say it has 12 years.
It was the base for Red Hat Satellite 5; Red Hat Satellite is now at version 6, which is based on something completely different called Katello. We didn't believe in that path at SUSE, so we continued with Spacewalk, in the form of Uyuni — I will explain why. SUSE Manager version 3.2 and earlier was also based on the original Spacewalk project. Spacewalk is already dead — it died almost 6 months ago — and Red Hat officially regards Uyuni as the continuation of Spacewalk. Now, why did we fork Spacewalk? Because Red Hat didn't really believe in Spacewalk anymore; at some point they stopped contributing code and started a new project called Katello — you may have heard of it: Foreman, Pulp, several components, a different model. But Spacewalk worked for us. Red Hat did not hand us the project, so we forked it in the form of Uyuni, and we made some of our features official. One of the most important features that Uyuni has versus Spacewalk is the two stacks: there's the original Spacewalk stack, what we call the traditional client, and there's also the Salt stack — Salt, the open source project. And then you can see the play on words: it's Salt, and Uyuni is the largest salt flat. Yeah, that's the joke. We still support the traditional client, because it allows customers or users to migrate from Spacewalk or Satellite 5 to Uyuni and SUSE Manager without converting to Salt — a transition path that many of them use — but all the new development goes into Salt clients, what we call the minions. It's fair to say that Salt manages clients most efficiently, and it is also supported agentless. So while I say two stacks, it's actually three ways of managing your clients: there's the traditional client, which requires an agent; there's the Salt minion, which is also an agent; and there is Salt SSH, which is agentless, kind of like Ansible. Not all the features are available with Salt SSH, because of some constraints with features that require continuous reporting to the server, but many of them are. So if you are looking for agentless, this is also a solution for you. Another of the features we added in Uyuni is containers and Kubernetes integration: you can create and security-audit (scan) containers from Uyuni. We also extended scalability a lot. In the past, Spacewalk was regarded as a solution for thousands of clients — say single-digit thousands, no more than two or three thousand — much like Katello, for instance, which doesn't scale beyond a few low-digit thousands of clients either. Now, with Uyuni, with a single server, we have done more than 30,000 clients — actually beyond that, but 35,000 clients we know can be done, because we have real cases of that — which simplifies maintenance and management. We have also spent a lot of effort on usability, because some things are not so easy when you have a ton of features and use cases like we have. I'm always surprised at how differently different users can use Uyuni. Some people have their own situations — firewalls, network policies or company policies — and they find their own way, and then they come to us and tell us, and it's really interesting, and then we enhance Uyuni to make that easier for them and for everybody. We have modernized the web UI, which was based on JSP and an old framework called Struts; we have added a React web UI, so the new pages are written in React and some of the older ones are also being replaced slowly.
We have modernized the code base, and it now uses Python 3 and Java 11. And this is the upstream for SUSE Manager since version 4.0. Actually, SUSE Manager 4.0 was released only last year, but in June 2018, after the release of SUSE Manager 3.2, we no longer used Spacewalk as the reference for SUSE Manager but Uyuni. So what can you do with Uyuni? You can of course do system deployments. You can do patch management. You can do service pack migration, like migrating from SLES 15 SP1 to SP2. You can do configuration management. You can do bare metal provisioning and, of course, virtual machine provisioning. You can schedule action chains to be performed on systems — that means you have several actions that you want to be performed one after the other, including reboots. You can use it for compliance, with the CVE audit and OpenSCAP capabilities: you can see your dashboard with all this information and fix the vulnerabilities in just one click. And there's of course an API; typically the users with large deployments — the thousands — prefer to use the API rather than the web UI, because the web UI can become confusing when you have that many systems (there's a small example below). We have added more features, because all of these — or most of these — are available in Spacewalk, but then we have added more things in Uyuni, like the transparent integration with Salt. This is something that makes Uyuni a lot more powerful than Spacewalk was, because with Salt you get a lot of possibilities for automation that you didn't have with Spacewalk. You can use it to manage on-premise, cloud, or hybrid- and multi-cloud systems. We have these cases, and there are even videos you can find on YouTube, some of them by SUSE — anything that you see for SUSE Manager also works with Uyuni — where people are using Uyuni to manage CentOS and RHEL and SLES and Ubuntu; some of them are running on premise, others are running on Azure, others on AWS. It all just works from the same console, with the same reporting and everything — one single dashboard, one single management plane for all your systems. We have this cool feature that we introduced last year, which is Content Lifecycle Management; it's very aligned with the DevOps way of working, where you define stages — development, test, production — for your software channels, and you promote the packages, test, and then deploy gradually to your systems. You can create groups of systems — a test group to deploy to first, or a canary group — and then when you go into production, instead of having surprises, this just works, and it's a very visual way. In the past, you could do this in a not-so-easy way from the command line by applying scripts, but this is so much better: when you try this, you never want to go back to anything involving the scripts. You have recurring actions that you can execute several times. You can build OS images and container images from the packages that the Uyuni server has mirrored, for several operating systems. You also get compliance features, and even subscription matching in the case of SUSE systems and products, so that you know you have enough subscriptions and no more subscriptions than you need — you can even optimize the number of subscriptions you need. It can also do some virtualization, and it can do monitoring.
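As a small example of the API route mentioned above: the XML-RPC endpoint at /rpc/api and the auth.login, system.listSystems and auth.logout calls are part of the documented Spacewalk-era API that Uyuni inherits, while the URL and credentials below are obviously placeholders. A minimal sketch in Python:

```python
from xmlrpc.client import ServerProxy

# Placeholders: point MANAGER_URL at your own Uyuni server.
# (Self-signed certificates may require passing a custom SSL context.)
MANAGER_URL = "https://uyuni.example.com/rpc/api"
USER, PASSWORD = "admin", "secret"

client = ServerProxy(MANAGER_URL)
key = client.auth.login(USER, PASSWORD)   # session token
try:
    # List all registered systems and print their IDs and names.
    for system in client.system.listSystems(key):
        print(system["id"], system["name"])
finally:
    client.auth.logout(key)
```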
We have included the Prometheus and Grafana stack, including federation and a reverse proxy, which are some very cool features because they allow you to aggregate from different sites and from different products: you could use the Uyuni server and the Uyuni dashboards as the authoritative dashboard, even if you're using additional stuff like a Kubernetes cluster or some other monitoring in the cloud that is not even managed by Uyuni — you can bring that into the Uyuni dashboards. And we have these cool features called formulas with forms, which are essentially Salt formulas with a visual aid. It's just a YAML template with a few placeholders that you can fill in, and it doesn't even require programming skills to create them, but it makes using Salt a lot easier and a lot more convenient. So where are we with Uyuni? We have done a lot since we went open source two years ago: the repository is public, the development happens in the open, and everything SUSE does happens first in Uyuni before SUSE Manager, so Uyuni is always slightly ahead of SUSE Manager. We are available — there are mailing lists; and, well, the IRC channel is actually deprecated, we now prefer Gitter because it's easier for people to join this web chat. We have a CI with Jenkins. It's not completely public: you can see the output, you can see what happens, but the parts where you need to go into detail we have not managed to make public, because they run on SUSE servers internally. The base operating system for Uyuni these days, since three months ago, is openSUSE Leap 15.2; that upgrade happened in one of this year's Uyuni releases, and after a new version of openSUSE Leap is released, we always move the server to it. You can of course still manage openSUSE Leap 15.1 or 15.0 clients. What are the client operating systems supported by Uyuni? A ton of them — that's one of the differentiators of Uyuni. We of course support any supported version of SLE, openSUSE Leap, Red Hat Enterprise Linux, CentOS — well, the myriad of clones that there are: RHEL, CentOS, Oracle Linux, Expanded Support. There are reports of Fedora being used, although some features on Fedora may break if you are not using our Salt packages but the packages provided by EPEL, so you need to be aware of this in case you want to try Fedora — this is a community contribution, it's not officially part of Uyuni yet. And there's some limited support for Amazon Linux 2; we will probably enhance this in the not so distant future. And there's of course support for non-RPM-based operating systems like Ubuntu, Debian, and something rather exotic called Astra Linux, which is a Russian version of Debian. So, what has happened — we are almost at the end of the year — what did we do in 2020 in Uyuni? We went from two releases a year to one a month, essentially. The only months we skipped were February — because we released on the last day of January, so it didn't really make a lot of sense to release in February, we didn't have that much to release — and August, because we needed summer vacation. The next release will happen around two weeks from now, at the end of October; we typically release at the end of the month. There are virtual machines and cloud images available for a lot of targets: Azure, Google Cloud, KVM, Hyper-V, and OpenStack. They are not yet available in the marketplaces; we don't know when we will do this, because it involves creating accounts on the cloud marketplaces.
So these are some complications that we have not been able to deal with so far. We have a Gitter channel, which is the fastest and best way to contact us for, say, immediate or real-time information. There's also, of course, the mailing list, and we have the Uyuni Community Hours. This is something that we started around May: every month, on the last Friday of the month, at 4 p.m. European time, we present what's new in the new release of Uyuni. The next Uyuni Community Hours are happening on the 30th of October, 4 p.m. European time — check the Uyuni mailing list and you will find the invitations. This year we also participated in Google Summer of Code. We had a very good result with a student who contributed to the documentation theme and the multiple language support — oops, I just disclosed something: translations. More features that we have added this year: the Hub. So I said that with a single server you can do more than 30,000 clients. But what if you want to go to hundreds of thousands of clients, or even a million clients — can you do that? Yes. Since this summer we have something called the Uyuni Hub, which allows you to orchestrate several Uyuni servers. This is not yet complete, but there are two parts to it. There's the XML-RPC API, which allows you to manage lots of servers and lots of clients from the API — which is typically, as I said earlier, what you are going to want, because if you have 200,000 clients, imagine doing that from the web UI; well, good luck with that, it will be confusing. You probably want to use the API, and that's why we implemented the API first. And then we also have Salt states that allow you to keep all the users and groups and organizations in sync across the different servers, all managed from the Hub. We are implementing more. The other thing that we implemented was activation keys, which are tokens that allow you to, let's say, subscribe servers to the same channels, the same configuration files, the same things — it makes it easier to keep different servers in sync. We also added maintenance windows, which are essentially a must if you are in any enterprise environment. You cannot just run actions whenever you want, and you don't want to allow accidents to happen. So with maintenance windows, when someone schedules an upgrade, or patching servers, or even building images at a certain point in time, if that point in time is not within the maintenance windows authorized by the organization, then those actions will be rejected and you will be told: hey, you cannot do this outside of a maintenance window. We have the recurring highstate. The highstate, for those of you unfamiliar with Salt, is essentially the whole state of the system: all the packages and the configuration files and everything, and the network configuration of your system. A very interesting use case is when you want to make sure that your systems stay compliant: someone may have access to a server individually and install some software, and then that system is non-compliant. With the recurring highstate, you can make sure that your systems always stay in the compliant state — in what your organization mandates, and nothing outside of it — and if there needs to be something different for one or more servers, then you make that a different state, and you want to be aware of that.
So this is very useful for compliance, or for patch optimization — it has lots of different uses, the recurring highstate. We also introduced a new installer framework. If you have ever tried to install systems in an automated way with AutoYaST or with Kickstart, you will know that it's not exactly easy to write these auto-installation profiles. Yomi is Salt-based, so you just need to write a few YAML files — it's very simple — and in Uyuni we have the formulas that provide the UI for this. So you just have this wizard where you can configure how you want to partition your disks, what kind of software you want to install, what systems you want and how you are going to boot them, and Yomi and Uyuni will take care of everything: they will run Cobbler behind the scenes and network-boot your computers. We also introduced storage pools for the Uyuni virtualization features, so we now have virtual machines not only with file-based storage, but also with storage pools and iSCSI, Ceph, and all of these possibilities. There's also EFI HTTP boot, so not only PXE boot — this is useful if, for instance, a machine is connected by wireless; and by the way, USB boot is also possible. Single sign-on for the web UI is useful if you have, for instance, Active Directory — and you want to connect your (I keep saying SUMA because I do lots of presentations) your Uyuni web UI to your Azure Active Directory. And we introduced new formulas — we now have probably around 20 or 25 formulas: to set up an OpenVPN server, to configure the CPU mitigations if you are scared of those Intel CPU problems, Spectre and everything; we can deploy Prometheus and Grafana; even Yomi is a formula. There's a ton of different formulas, and I will talk about that more because we are going to introduce more formulas in the future. And in case you are using RHEL systems purely in the cloud, where you don't have a Red Hat CDN account, you can also manage those systems using a custom header that you can add to our reposync, which is the tool that mirrors the packages. We also introduced monitoring. The Prometheus service discovery is something that we have implemented for Prometheus, and we are looking forward to submitting it upstream now that they have lifted the restriction — there was a short period where they were not accepting any more service discovery features; now that it's allowed again, we are going to submit it upstream, so that all your managed systems can be auto-discovered, have monitoring deployed to them, and then be shown in the Grafana dashboards. We also have federation, which means that when you have different sites, each with its own Prometheus server — because you have weak network links across the sites and you don't want to flood them with monitoring traffic — you aggregate the metrics in the per-site Prometheus servers and federate all of that in a top-level Prometheus server. That can also be configured by a formula, by the way; it's really just filling in some values, essentially the URLs of the servers, and that's it. We also have this reverse proxy for Prometheus, which simplifies the setup. Typically the way Prometheus works is you have exporters installed on the client machines, and each of the exporters requires its own TCP port.
Now that, if you are in an enterprise environment, is not exactly easy to deal with for security departments, because they don't want you to install three Prometheus exporters, each of them opening a port. So this reverse proxy for exporters puts all of those exporters behind a single port, so that the security authorization is much, much easier; and even if you want to secure your systems yourself, it's also easier — you just need to care about one port (a toy illustration follows below). We offer Grafana dashboards for Uyuni itself and for CaaS Platform, which is SUSE's Kubernetes distribution. There's also some integrated self-monitoring — a little dashboard in the web UI itself, not a separate Grafana dashboard. And this is something that may look trivial, but it happened a lot: the way Uyuni works is that if you want to manage CentOS 7 and CentOS 8 and SLES 15 SP1 and Ubuntu, you need to mirror all those products — the whole thing. That requires hundreds of gigabytes of disk. Sometimes people do not notice and they exhaust the hard disk, especially the file system where the database resides, and then the database becomes corrupted, or the packages become incomplete, and all hell breaks loose. Now we implemented a warning, and we stop Uyuni and stop mirroring packages — so we do not fill the disk any further — when there is less than 5% of the disk available, to make sure that your system is not destroyed unintentionally. We have added CaaS Platform as a client; we can manage the whole cluster, in fact. And again, this came as a community contribution: RHEL 8 support was also added in 2020, including content lifecycle management support for AppStreams, which is something that, if you have dealt with AppStreams, you will know is not exactly easy sometimes. Sometimes you have conflicts between AppStreams, or between packages in different AppStreams, and you have to worry about that; if you try to apply a filter that conflicts with something else, we will warn you and not allow you to create an invalid repository. I can tell you that this is something people like. When I say RHEL 8, it includes all the clones: so it's RHEL 8 — genuine RHEL — then CentOS, and it finally supports Oracle Linux; there's also another RHEL clone by Princeton University, which should also work because it's essentially just another clone. Such a system is regarded as a RHEL system, and you can use the same client tools that you use for CentOS also for Oracle Linux — everything is the same because it's binary compatible. And we can also do subscription matching for public clouds, and even list systems running in public clouds, which was also added this year with our virtual host gatherers. We worked a lot on usability: previously, until mid-2020, reposync — so synchronizing the packages — was really slow; now it's like 10 times faster. Building really huge content lifecycle projects was also very, very slow; we optimized that part a lot, so what used to take more than 20 hours now takes half an hour — you can see the kind of difference we made there. Also something really useful: previously, when you upgraded your Uyuni server, you needed to do a two-step upgrade. One step was updating the software itself, so the RPM packages; the other was upgrading the database. This led to lots of problems, because sometimes people forgot to upgrade the database.
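Going back to the exporter reverse proxy mentioned at the top of this passage: this is not Uyuni's actual implementation, just a toy sketch of the single-port idea, with the local exporter ports and URL paths chosen arbitrarily.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical mapping: each locally running exporter keeps its own port,
# but is only reachable from outside through this single listener.
EXPORTERS = {
    "/node": "http://127.0.0.1:9100/metrics",
    "/postgres": "http://127.0.0.1:9187/metrics",
}

class ExporterProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        target = EXPORTERS.get(self.path)
        if target is None:
            self.send_error(404, "unknown exporter")
            return
        # Fetch the metrics from the local exporter and relay them as-is.
        body = urlopen(target, timeout=5).read()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # A single port to open in the firewall for all exporters on this host.
    HTTPServer(("0.0.0.0", 9999), ExporterProxy).serve_forever()
```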
Sometimes they pushed it back: they wanted the new software but could not wait the several hours that, in some extreme cases, the database migration may take — and then they had broken functionality. Now it's only one step: you upgrade the software, you upgrade the database, you schedule your maintenance window for this, and that's it. We also generate the bootstrap repositories automatically. What's a bootstrap repository? It's the minimal repository you need when you want to manage a system for the first time: you need to install — or rather, transfer — some software to the client for it to be managed. But since the system is not managed yet, how do you connect the server with the client? It doesn't have access to the repositories yet. That's what a bootstrap repository is: a minimal repository which contains the minimum set of packages that we need to transfer to a client to be able to manage it for the first time, and from there we can go on. Once it's registered, the bootstrap repository is never used again. Bootstrap repositories were created manually for years, which again led to problems, because people forgot to regenerate them, or even to create them in the first place. Now, when we sync the channels — when we mirror the packages — we automatically generate or update the bootstrap repositories, and that problem is gone. Another very useful feature is bootstrapping clients with SSH keys. In the past this was only possible with a login and password, but if you're on the cloud, those machines typically do not come with a login and password; they are only accessible by SSH key. Now we support that. Service pack migration — something that was again a real source of problems. People were trying the service pack migration and then, after carefully... I'm going to turn my camera on, because I just realized it was off for some reason — it turns off when you do screen sharing. Ah... okay, makes sense. So have I now stopped sharing my screen? I don't know, it seems like the slides are still up. Okay, surprising. Okay then. Yeah, so what happened is that people tried the service pack migration, configured all the packages that they wanted to be uninstalled or installed, the configuration files to be deployed, everything — it worked — and then they needed to redo everything from scratch, and they always forgot some step. So they reported: hey, this doesn't work; it used to work, now it breaks, and "I have touched nothing, I did nothing" — that's the magic sentence. Yeah, right. So now, when the service pack migration dry run succeeds, you can go to the history and say: repeat this — and it will repeat exactly the successful dry run. This has saved people from a lot of mistakes. We have also enhanced the support for Debian and Ubuntu: when we first introduced support for Ubuntu, it didn't support signed metadata, and some features were missing; now we have full support for this. And we also introduced a single-page application web UI based on React, so that the UI is now more responsive than the JSP pages. We have done a lot of work with regards to documentation: there's now a large deployments guide.
There's a public cloud quickstart guide — like a five-page guide that tells you, essentially, everything from installation to managing your first client and creating your first content lifecycle management project. And there have been huge improvements to all the guides in general: the administration guide and the client configuration guide are now a lot, a lot better than they used to be; the reference guide is also improved. There are improvements all across the documentation — we have almost 800 pages of documentation as of now. And we have this kitchen-sink formula: it doesn't really do anything, but it shows all the functionality, all the possibilities of the formulas with forms framework, and it's a good start if you want to write your own formula. So what are we going to do next — where are we heading in the coming months? Translations. I gave a talk about translations in Uyuni yesterday; we're essentially using the Weblate instance provided by openSUSE, since we are associated with openSUSE anyway, and they will be introduced soon — I don't know if it will be in the next release, at the end of this month, or maybe in the one after, but we expect Japanese to be the first translation that we ship. The Japanese translation was actually contributed by one of our community members: in around six weeks he translated the whole of Uyuni, which is more than 100,000 words, to Japanese. That's impressive, I have to say. We will add support for something SUSE-specific, which is retracted patches. Let me start with what retracted patches are not: they will not uninstall anything that you have installed on your systems. Okay? Retracted patches are for when SUSE has released a patch which has bad side effects; there's some metadata that SUSE can add to the channel metadata saying "retract this patch", so that it is not installed anymore — but it will not be uninstalled. At the Uyuni Community Hours six weeks ago we had the problem that some people did not really understand this; it caught me by surprise because I didn't think it could be misinterpreted, but that's why I insist on it. We are going to add SAP content: Prometheus exporters for SAP, Grafana dashboards for SAP, and quickstarts — the quickstart kind of guide — for SAP. So this will make managing SLES for SAP much easier with Uyuni. And we have also introduced themes in the web UI. It all started with the Uyuni theme; then we have a dark theme, a light theme, and probably more. You can create your own themes: it's just a matter of forking and editing a few CSS files and providing colors, maybe some pictures you want — it's really, really easy to create your own theme. If you're interested, get in touch via the mailing list or the Gitter chat, and we can help you. We will also document this, by the way; it's not documented yet because it's not released — actually, I think it was merged yesterday, but yeah, it's not released. Redfish: if you know IPMI, or have used IPMI, you know that it works to boot machines and can do a lot of things, but it's not exactly the most convenient thing to use. Redfish is the new generation of IPMI — essentially, you can think of it as HTTP-based IPMI. It's now also implemented and merged; it will be released soon, probably in the 2020.10 release. And we also want to include Debian and Ubuntu errata information — that's something that is currently missing.
Now, Debian and Ubuntu do not really have the concept of errata the way Red Hat or SUSE operating systems have, but from the information about the updates to Debian and Ubuntu within the same release, we can synthesize that and provide the information. In the end, it's just saying: you have this patch, or this update to existing packages, for your release, and it fixes this CVE or this bug or this security problem. And we will keep enhancing the Hub. The next thing we are going to work on is inter-server sync. Currently, when you have several Uyuni servers all managed by the Hub, each of the servers downloads the packages — the RPM packages — itself. We want to avoid that to save traffic, and inter-server sync will take care of it: synchronizing the packages, and also the configuration files and everything, so that you mirror only once — the files live on the Hub and then propagate to all the other servers. We are working more on virtualization — this is from the SUSE side; of course there are more community contributions, and I will mention one at the end. Maintenance windows currently work with iCal files that you can generate with Outlook or ServiceNow or any other calendaring tool — KOrganizer, for instance, or Evolution (there's a small example of such a file below). But you don't really see them; there's no calendar view in Uyuni yet. We want to add that, because seeing maintenance windows visually helps a lot. Cluster management: currently we can manage CaaSP clusters, and we want to add more cluster types. That could be standard Kubernetes distributions, non-SUSE ones, or different kinds of clusters — there are several ideas for this, because a "cluster" doesn't really need to be a cluster in the strict sense. For you, maybe all your Apache servers are a cluster; or if I am, I don't know, a WordPress hosting service, for me a cluster can be a database server plus two Apaches plus two storage servers — I can make my own cluster out of that and call it a WordPress cluster. Writing cluster plugins is relatively easy, and we want to add more, and of course we expect the community to contribute different cluster types. We want to keep making Uyuni easier to use — we have some ideas with regards to usability; for instance, the system list page or the products page are things that can be improved. And of course we continue building the community. If you were part of the Uyuni community a year ago, you know it was relatively small; now it has grown, and you can see there's a lot of activity on Gitter, with users helping each other. There are of course people from SUSE answering, but I'm very happy when I see one user helping another user while we are just listening — there, or in the Community Hours, when people start proposing things and presenting their own stuff, like the Ansible playbooks that were presented to install Uyuni. Now, say you want to contribute to Uyuni because you are excited about this, as I am. There are lots of different ways of contributing. You can of course contribute with ideas and feedback — we are available through the mailing list, Gitter and GitHub issues. You can contribute with code: if you want to set up your development environment, this wiki page explains how to do that step by step, even how to configure your IDE; then just hack and submit a pull request — if in doubt, just contact us first and we will help you. And another way of contributing is with translations.
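Going back to the maintenance windows mentioned above, here is a minimal sketch of generating such an iCal file in Python. The RRULE, summary and timestamps are arbitrary examples, and real calendar tools (Outlook, ServiceNow, KOrganizer) add time zones, UIDs and more.

```python
from datetime import datetime, timedelta

def maintenance_window_ics(summary: str, start: datetime,
                           hours: int, rrule: str) -> str:
    """Emit a minimal iCalendar file describing one recurring maintenance window."""
    fmt = "%Y%m%dT%H%M%SZ"
    end = start + timedelta(hours=hours)
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example//maintenance-windows//EN",
        "BEGIN:VEVENT",
        f"UID:{summary.lower().replace(' ', '-')}@example.com",
        f"DTSTAMP:{datetime.utcnow().strftime(fmt)}",
        f"DTSTART:{start.strftime(fmt)}",
        f"DTEND:{end.strftime(fmt)}",
        f"RRULE:{rrule}",
        f"SUMMARY:{summary}",
        "END:VEVENT",
        "END:VCALENDAR",
        "",
    ])

# Every first Saturday of the month, 4 hours starting at 22:00 UTC:
# print(maintenance_window_ics("Monthly patching",
#                              datetime(2020, 11, 7, 22, 0),
#                              4, "FREQ=MONTHLY;BYDAY=1SA"))
```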
In this case you don't even need to write any code or clone a Git repository or anything: just go to the Uyuni Weblate project and you will find the components. A wiki page explains everything you will find there, and if you want to see the output of what you are writing but don't want to set up the documentation toolchain on your local system — it's a bit more complex to set up — we provide a virtual machine with the toolchain preinstalled. There are tons of opportunities for the community, so here are some ideas; I'm sure you will have more. Of course there are translations — this is an easy one because it requires no coding skills — or writing articles or videos with learning pills about Uyuni. There's creating forms for existing Salt formulas: there are tons of formulas available on GitHub, and it's just a matter of adding a form. There's the Debian and Ubuntu errata information — there are already two community efforts for this; I have not tried either of them yet, and I know some people reported issues, so it would be good if someone could take this and say, okay, this is what's missing, or this is how to use it, and document it. Auto-installation is something we are totally missing for Debian-based operating systems: even though Uyuni supports Kickstart, the kind of automated installation that Ubuntu supports is not the same, so this could be an easy start, and ideally we should support preseed, which is the official Debian auto-installation way, and the Ubuntu auto-installation way. There's completing the Amazon Linux 2 support, which requires dealing with the metadata, because Amazon Linux 2 uses SQLite metadata versus the XML metadata that every other RPM operating system uses. Or write the virtual host gatherer for your favorite cloud or hypervisor: a virtual host gatherer is a plugin which is really small — less than two hundred lines of code, really — that connects to a hypervisor or a hyperscaler, lists all the systems that are available there and brings them into Uyuni (there's a rough sketch of the idea below). There are some crazier ideas, like VDI with Uyuni — even with our limited virtualization support this is completely possible if you think about it; I have something written that I will probably publish in the wiki soon. More advanced stuff: containers — managing Helm charts, integrating containers plus packages in content lifecycle management, because sometimes some products require that you install packages and then containers, combined; or using a registry like Harbor for staging the containers, so we don't really need to implement container staging in Uyuni ourselves. Enhancing virtualization: network configuration, snapshots — PXE boot is in the works — and a lot more advanced configuration like CPU pinning, lots of stuff. Windows, Mac, Android — more clients? Yes. Or an integrated editor, maybe based on Eclipse Theia or Microsoft Monaco, which is the same editor that is used for Visual Studio Code. Or, if you want to create your own dashboards integrated in Uyuni, then having a web framework for that would also be useful, because with Grafana you can do that, but it doesn't feel as integrated as it could be. We are participating in Hacktoberfest this year, so you can contribute with code or documentation or translations and get a t-shirt — you can use one of the ideas I just explained, or if you have more ideas, you're free to do so.
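To give a feel for what such a gatherer plugin does, here is a rough sketch against a fictional cloud API. The class and method names below are made up for illustration only — the real virtual-host-gatherer project defines its own plugin interface — and the endpoint and response format are assumptions.

```python
# Hypothetical gatherer sketch: names like ExampleCloudGatherer and
# list_guests are invented for illustration and are not the real interface.
import json
from urllib.request import urlopen

class ExampleCloudGatherer:
    """Lists hypervisors and their guests from a fictional cloud API."""

    def __init__(self, api_url: str, token: str):
        self.api_url = api_url
        self.token = token

    def list_guests(self) -> dict:
        """Return {hypervisor_id: {"name": ..., "cpus": ..., "vms": {vm_name: uuid}}}."""
        raw = urlopen(f"{self.api_url}/v1/hosts?token={self.token}").read()
        result = {}
        for host in json.loads(raw):
            result[host["id"]] = {
                "name": host["name"],
                "cpus": host.get("cpus", 0),
                "vms": {vm["name"]: vm["uuid"] for vm in host.get("vms", [])},
            }
        return result

# The gatherer's only job is to produce this kind of inventory; the server
# side then matches the VM identifiers against registered clients.
```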
I would recommend getting in touch first to make sure that you are on the right path. Here you can see that we have several GitHub issues labeled with Hacktoberfest, and this is the Hacktoberfest initiative page where you can find more information about Hacktoberfest. Questions — I am going to start with an answer, by the way, because this is asked a lot. Is Uyuni available for CentOS, or RHEL, or Oracle Linux, or Debian? No. Not yet, but there is community effort in that direction. There are two guys working on making Uyuni available on CentOS. That effort is rather advanced. I don't know when it will come, and we will, of course, accept it. One day, man, one day. And the other question that I get asked a lot is whether we can manage Windows clients. Not yet. It's my pet project. It's not that difficult if you are using the Salt stack to manage the clients. I can even mirror the updates from Microsoft so that they can be deployed to the clients. It kind of works, but it doesn't, because it is not visible in the web UI — it works from the command line, but not from the web UI. If you are interested, get in touch and we will make more advancements here. And that is essentially it. So now it is time for your questions, instead of my answers. Oh, so that's why it was A&Q — I was wondering. So, a couple of questions from me, at least. The first is: why are you going to do this retracted patches stuff? Because the problem with retracted patches is not that, oh, it doesn't affect existing installed systems — it makes configuration management and installation and mirroring completely non-deterministic. Because if SUSE pushes a thing that says, hey, this patch shouldn't be synced or downloaded or installed anymore, that essentially breaks the consistency that most people use Uyuni for. That's the reason why the community was like, this makes no sense to actually have it. No, no, no, no. You can still explicitly install it. Yeah, but it won't happen automatically, when some machines have it and others don't. That's a problem. No, no, no, that will not happen. So once your patch is in, it doesn't get removed from your content lifecycle management project. It just happens that if a patch is released today and you only build this content lifecycle management project in two weeks, because your maintenance window is once a month, which is a typical maintenance window, then you will see that by default the retracted patches are — and it's not even that, you need to add this filter. So the way we are implementing this is that you need to add a filter saying: do not add retracted patches. Okay, so nothing non-deterministic, nothing unexpected will happen. The only thing that will happen is that if you don't want retracted patches, you will have this possibility. Okay, so it's not active by default and nobody has to use it? No, no, no. So it's going to work in the way that you want. Of course, if you still want to install a retracted patch, because the bad side effect for which it was retracted doesn't affect you or you can live with it, you can still do that. But then don't cry if it destroys your systems. Well, look, I'd rather have five destroyed systems that died the same way rather than two destroyed systems that died one way and three that died a different way. That's worse. So that was my objection to it, at least at the last community meeting.
The other question I have is, with the new theming stuff, does this mean that we can do things like have fixed terminology for some of the more idiosyncratic phrasing that's used in Uyuni that we inherit from SUMA, like using "patches" to refer to updates — and I'm not going to get used to the word "patches", because it's really confusing and it doesn't make sense to call updates "patches". Yeah, so themes — that's a whole different discussion. I'm not going to enter into that; that's a discussion for, let's say, the zypper people. But themes are visual themes — colors, pictures, fonts, that kind of theme — and translations. Yes, you could create your own language variant, for instance, that replaces every occurrence of the word "patch" with something else. Yeah, I'm not going to do that, that's crazy. No, I was just hoping that that particular word choice could go away in some way for Uyuni, because outside of SUSE corporate and direct customers, that's not a term that's used for package updates, like, ever. And I know where the heritage comes from, I know why it's called patches, and I know that this is just one of those weird in-use idiosyncratic things that shouldn't exist, but it would just be nice if it didn't have to be propagated into Uyuni to confuse everyone else. Yeah, well, themes are a different thing. So you could create your own translation, if you want. But discussing the default term in English is a different discussion. We use the term that SUSE uses, and when SUSE changes the official term for patches, then we will also change it. But so far these are patches, even though people call them updates. And I have to say that, being a former Debian developer, I was also surprised by what a patch is, because in the Debian world there are no patches — everything is an update. In that world and in the Ubuntu world. Also, another thing I just wanted to point out as a comment: you cannot use preseed to automate installations of Ubuntu anymore. That's not a thing. Oh really, they have removed that in 20.04? I read about that, but they still kept it around. Nope. The Debian installer is no longer used for Ubuntu at all as of 20.04. So you're out of luck in terms of, you know, automation of mass installation. Wow, then this is going to get fun, because it's preseed for Debian, something else for Ubuntu, and then we have Kickstart for RHEL clones and AutoYaST. So it's a mess. The current recommendation for Ubuntu is to take one of their cloud images and use cloud-init. Cloud-init, yeah. Yep. Well, they invented cloud-init, so of course. Yeah, I can understand that. So the thing is that we use cloud-init in sumaform, which is the tool that we use for development and QA. Maybe adding cloud-init is not even that hard. It's certainly useful as a cross-distro fast-install thing, because I believe there is a similar project out there that can do bare-metal installation super quickly by abusing cloud images and doing weird things with cloud-init. And then we have normal installations.
And I'm showing here, navigating a bit of the Uyuni web UI: you can see this is the formulas with forms. Behind this there is an original YAML file, but it is super easy to use — it renders to something like this. Or if I want to deploy Prometheus exporters, I just need to click this, go to Prometheus exporters and say hey, and I can even enable the reverse proxy if I want, and deploy several: node exporter, Apache exporter, Postgres. I am going to deploy these, then save the formula, and then I need to apply the highstate, which I can schedule at any time. I can even add it to a new action chain to perform this together with other stuff, and it will just happen. That's it. And it's at the top of the hour. Thank you very much for attending. If you are interested, join our Uyuni community hours, Friday in two weeks at 4pm European time — there is more information on the uyuni-announce, uyuni-devel and uyuni-users mailing lists. If you want to translate Uyuni, join the Uyuni translation mailing list. We are also available on Gitter, and you can also find me by email. Thank you.
Configuration management, content management, patch management, compliance, building images & containers, virtualization... you name it! Uyuni is a software-defined infrastructure and configuration management solution. It bootstraps physical servers, creates VMs for virtualization and cloud, deploys and updates packages -even with content lifecycle management features-, builds container images, and tracks what runs on your Kubernetes clusters. All using Salt under the hood! Uyuni provides you a high-class frontend solution to interact with Salt, manage your states, formulas with forms, and much more using a web UI. Or you could use our APIs. All the major Linux distributions are supported: SUSE Linux Enterprise Server, Red Hat Enterprise Linux, Debian, Ubuntu, openSUSE, CentOS, Oracle Linux and we have reports of Fedora. Uyuni is open source, backed by SUSE and actively developed. This presentation will give you an overview about Uyuni: where we are, what's next and how the community could help (hint: some features are not -yet- supported on some Linux distributions). A demo of Uyuni with several different client operating systems can be included if time allows.
10.5446/54663 (DOI)
at least heard about the Open Build Service, but in case you haven't: the Open Build Service is the heart for creating the openSUSE and the SLE distributions. It's a fully open-source build server and it can be used to create all kinds of different artifacts. I think it's mostly known for being used to build the openSUSE distributions, and there it is used to build RPM packages, but it can do so much more. It can build Debian packages, it can build packages for Arch Linux, it can build AppImages, it can build virtual machines of all kinds, it can build Vagrant boxes, it can build containers — and all of that more or less using the same workflow. One of the really nice things about the Open Build Service is the automated rebuilds. It is a build server: you define your build recipe for whatever you want to build, be it an RPM, an AppImage or a virtual machine, and in case one of your dependencies changes, your package or your binary gets rebuilt. And multiple platforms — Guillaume just said it works on ARM64 as well, which is pretty nice. So one of the selling points of Visual Studio Code, at least for this project, was that it's very, very popular. Visual Studio Code, I think, popped up, it became a thing around 2015-ish. And according to the Stack Overflow Developer Survey, it had already in 2016 a market share of something like 7%, then the next year it was 24, then 30, and last year it breached the 50% mark. So in less than half a decade, more than half of all developers are using Visual Studio Code overall, which is impressive to say the least. And another nice thing about Visual Studio Code is that it has a very well documented extension API. So in case you are already using it and you are itching to — well, there's something missing and you want to extend it — this is actually not too hard to pull off. If it's feasible to be done in VS Code, then it's pretty doable. The API is quite well documented, there are tutorials, there are example extensions, and there's also a relatively active community around that. So in case you're interested in something like that, go for it. Now, I've also talked about this a little. So why Visual Studio Code? Why did we choose this and not start with Emacs, which I, as a passionate Emacs user, would have of course loved and preferred? But we also have to be a little pragmatic here, and Visual Studio Code is extremely popular. So by targeting this one first, we essentially reach 50% of all developers; to achieve a similar reach we'd have to implement this extension for every other editor, and we still wouldn't get the same market share. So starting with Visual Studio Code makes sense. It also makes sense from another point of view, and for this I have to go on a small detour. What makes Visual Studio Code really popular, at least in my opinion, is that the initial user experience is really outstandingly good. If you open it up, it's easy to understand, it immediately clicks, and you can get started really fast. And if there's something missing, it will tell you what you can do. So for instance, if you open an RPM spec file for the first time, it will tell you: hey, you have no syntax highlighting, do you want to install this extension? If you do the same thing in Emacs or vi, yeah, it will do nothing like that.
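To give a feel for the extension API mentioned above, here is a minimal sketch of an extension entry point. This is a generic illustration, not code from the Open Build Service Connector; the command id and the message are made up for the example.

```typescript
// Minimal VS Code extension entry point (illustrative sketch only).
import * as vscode from "vscode";

export function activate(context: vscode.ExtensionContext): void {
  // The command id must also be declared under "contributes.commands"
  // in the extension's package.json.
  const disposable = vscode.commands.registerCommand("sample.hello", () => {
    vscode.window.showInformationMessage("Hello from a sample extension!");
  });
  // Pushing the disposable lets VS Code clean up when the extension unloads.
  context.subscriptions.push(disposable);
}

export function deactivate(): void {
  // Nothing to clean up in this sketch.
}
```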
So that's, and the advantage of this is, if we create an extension for Visual Studio Code, while it might not be your preferred editor, Visual Studio Code is simple enough to grog so that you can still use it and be decently productive, even if it might not be your thing or your absolutely preferred thing. And this ties directly into the consistent UI and UX that Visual Studio Code provides. So this is one of the, it's one of its advantages and also one of its weaknesses. So you can't do anything really super fancy with Visual Studio Code, in case you are used to these really powerful extensions that you get, for instance, for your Emacs or for VI, you won't get those really in VS Code, because the API is limited on purpose. And by, that might sound like a bad thing, but that's done so that you have, that as a developer, you are forced to do this consistency. And last, but not least, is the language server protocol, which is built into Visual Studio Code. So this is also one of the inventions of the new Microsoft. It's a communication protocol that can be used to give your editor, so let's call it CodeSmart, so stuff like auto completion, what you see here. It's essentially a communication protocol between an editor and the language server. And the language server is something that is an external program that analyzes your source code. And the editor can query the language server and ask it, hey, is this source code correct or what auto completion should I provide here and there. And the idea behind this is essentially you as an author of a language, of a new programming language, you just write this one language server. And it works, and you don't have to implement a plugin for every single text editor. And eventually we'd like to leverage this as well for stuff like RPM spec files or other build recipes. But unfortunately, didn't get to this part yet. Now, as every journey, there are challenges. And this one has been partially simple, but it has been also a bit rocky from time to time. So let's take a look at what the individual challenges were that we faced. And the first one is, well, was on the Visual Studio code side in terms of the UI. So what you can see here is just another screenshot of Visual Studio code. And the issue is here, you can't really change a lot of this. So if you want to, if you, you have your Visual Studio code windows, we've got your editor with your tabs, your terminals, and your side view. And if you really want to do something that does more than adding a button, you are limited to this part. You can display output in the console, and you can add certain types of overlays to the text editor, and you can add buttons, for instance, here and there. But that's about, that's about it. If you really want to add additional data, you have to add them in this sidebar. And so, for instance, you can create a new, one of these new sidebar views, and then you can create a preview here that looks something like this. So this view shows you, this is the, this is the Explorer, which is just your, essentially your file manager. Now, this, this image is also a little bit misleading because it shows, because you might say, now, okay, well, this is not too bad. I mean, you have, you got all these, all these types of elements here, and they have different colors, and you got these icons in here. And so, and so that's, that's not too bad. Well, actually, you can't influence this. 
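For reference, the sidebar views described here are populated through VS Code's TreeDataProvider interface. The sketch below uses an invented view id and item shape — it is not the connector's actual code — and it only supplies labels and collapsible state, which is exactly the limitation being discussed: colors and fonts stay under VS Code's control.

```typescript
// Sketch of a sidebar tree view fed through vscode.TreeDataProvider.
// The view id "obsBookmarks" and the Bookmark shape are assumptions for this
// example; the view id would also need to be declared under
// "contributes.views" in package.json.
import * as vscode from "vscode";

interface Bookmark {
  label: string;          // e.g. a project or package name
  children?: Bookmark[];  // e.g. packages bookmarked under a project
}

class BookmarkProvider implements vscode.TreeDataProvider<Bookmark> {
  constructor(private readonly roots: Bookmark[]) {}

  getTreeItem(element: Bookmark): vscode.TreeItem {
    // Only label, icon, collapsible state and similar properties can be set.
    const collapsible = element.children?.length
      ? vscode.TreeItemCollapsibleState.Collapsed
      : vscode.TreeItemCollapsibleState.None;
    return new vscode.TreeItem(element.label, collapsible);
  }

  getChildren(element?: Bookmark): Bookmark[] {
    return element ? element.children ?? [] : this.roots;
  }
}

export function registerBookmarkView(context: vscode.ExtensionContext): void {
  const provider = new BookmarkProvider([
    { label: "openSUSE:Factory", children: [{ label: "gcc" }] },
  ]);
  context.subscriptions.push(
    vscode.window.registerTreeDataProvider("obsBookmarks", provider)
  );
}
```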
So you can tell Visual Studio code, hey, I'd like maybe an icon in front of this, and display this text, but color, you can't, you can't change the color of elements, you can't change the text style, you can't change the font. And as I said, this is, this is done on, this is really done on purpose, so that the thing looks the same everywhere. But it's also kind of limiting. So I have, I initially wanted to create, to create a sidebar that will show you all your requests that are open against your packages, and requests that have been declined to strike them out or to show them in red. Unfortunately, that's not possible. It's also not really easily doable to, to, for instance, display a graph in the, in the main window, you can do that via so called that view views, but that's relatively complicated. So the UI is in this sense, unfortunately, rather limiting. So we had to work with what we have. And so, so far, it went okay. But here and there, I would have liked to have a little bit more customizability. Another thing is the user experience. So this is mostly, we have roughly two target audiences, I would say. And I'm calling one of them the expert and the other one a beginner, which essentially the, this is from the point of view of packaging. The idea is the expert is, is your distribution packages. So someone who has, who maintains few dozen, maybe a few, even a few hundred packages or someone who reviews a ton of packages. And the beginner is someone who's more of a beginner to packaging. So may, so probably someone who just, who's more of a, more developer and just wants to build their project and the open build service. And now for the expert, the expert needs, needs a good overview of over a whole bunch of stuff. The expert wants to see all kinds of all their projects, all their packages, their requests, and they want to access all this information relatively quickly. So preferably via keyboard shortcuts and all that, you need access to the version control and all that in a hope, in a relatively streamlined experience. The beginner probably doesn't, doesn't care about this being all very efficient and very fast, but the beginner just wants, yeah, just build my project and don't get in my way. And preferably they want something that's more simple and that should be also guided. And this is, this is a little bit of a challenge since we have, we have to bridge the, bridge these two and find something that's, that doesn't overload the beginner, but is still useful. And this has been relatively hard, mostly because, mostly because it's been a lot of the ideas were done by me and I'm not a user experience expert. So in case any one of you gives this a try and finds the user experience terrible, open an issue on GitHub, please. I definitely like to hear some feedback about that. Now, so VS Code was one part of this. The other part is the open build service. And that has been also quite challenging in some regards. So the, the open build service has a really extensive API in case you're, you are a package and you're using the command line OSC client, then the OSC client is communicating with, with the open build service via its API. So all that you can see on all that you can see what OSC does, that's all done via the API. The web UI of the open build service, that one, I think that one doesn't use the API, at least not, not completely. So the web UI actually can do stuff that you can't do as efficiently with the, with the API. 
And yeah, big problem in my opinion with the API is the documentation is lacking in a few regards. So there's, there's some, some parts are not, there's just information that's not been updated, that's missing, or that's just not super well explained. So that could, that could use some help. But on the other hand, what's really nice about the documentation while it use about the API, while it uses XML, and that might be off putting to some people, the schema. So every, every API route has a defined schema. And so you can pretty much rely on getting, getting certain stuff back, which is very good. And that ties, that ties itself very well into TypeScript, because I can just define a certain type that I'll get back from, from the open build service. And I, I get that, and I can just convert that into object in TypeScript. That's, that's actually really nice. But as I said, parts could be the documentation could, it could especially use tutorials, how to use it, since that part is not easy to, to extrapolate just from a documentation of the routes. Another thing is this is rather minor, but I think the API could use some, some type of versioning or some type of deprecation since this, there's few routes that are either not, not functional, or that simply are discouraged from being used, or that could be, could be maybe improved at some point. And this is currently not easily possible, since or in, in terms of changing how routes behave. That's really not possible at all, since if you would just change how a certain route API route behaves, you would just, just break everything. And that's far from ideal. And so what's, what do you see, what you see in the, in the wild is essentially that some, some API providers have a slash V1 API. And at some point they, they just start a slash V2 API and, and then start event and eventually deprecate the V1 and get rid of it. But yeah, so what's, what I find very challenging on OBS is the handling of the history. And if you are, if you have worked with OSC and you've, you've branched a package. And so you, you might already know that you branch a package with OSC. And then you'd take a look at the log in the branch package and history is just one comment there. And so the, the history handling with OBS is kind of weird because initially OBS started as a built, started as a built server and not really something which is versioned and that got, that got built on top of that. So you have to take into consideration that OBS is, it's not super old, but when it became a thing, Git already was a thing, but it was not the only thing. So that's also why the OSC command line client is not modeled after Git because back in the day when it was conceived, Git wasn't the most popular version control system. It was SVN. And so that's why OSC is modeled after SVN rather than after Git. That doesn't mean anything about the backend since the, the, how the history is handled on the build service is kind of independent of that. But history handling is kind of weird and that ties into anything that involves linking of packages. So this part is what I've been really struggling with since once you start linking packages. So as an explanation, linking packages means essentially if you do an OSC branch of something, you create a so-called package link. And that means you have your original package, let's say GCC and you say OSC branch GCC. And now what OBS does it creates a link in your home project to GCC. 
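Returning for a moment to the point made at the start of this passage — that every API route has a fixed XML schema which maps directly onto a TypeScript type — here is a rough sketch of what that looks like. The xml2js parser and the field names are assumptions chosen for the example; this is neither the exact OBS schema nor the connector's real API wrapper.

```typescript
// Rough sketch of turning a fixed-schema XML reply into a typed object.
import { parseStringPromise } from "xml2js";

interface PackageFile {
  name: string;
  md5: string;
}

interface PackageDirectory {
  name: string;
  files: PackageFile[];
}

export async function parsePackageDirectory(xml: string): Promise<PackageDirectory> {
  // With xml2js defaults, attributes end up under "$" and child elements
  // become arrays, so a fixed schema translates directly into a typed object.
  const doc = await parseStringPromise(xml);
  const dir = doc.directory;
  return {
    name: dir.$.name,
    files: (dir.entry ?? []).map((e: { $: PackageFile }) => ({
      name: e.$.name,
      md5: e.$.md5,
    })),
  };
}
```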
And, but the, the thing is, this is not a branch like you'd think of a Git branch, but your changes that you make in your home project, they are applied on top of the revision from which you branched, but also taking into consideration the current head of GCC. So in case your, in case the package GCC gets updated from which you branched, your current state also changes. And that's, it's really suboptimal from a version control point of view, because your history is not, not really static. And stuff can and, and past revisions that worked suddenly don't work anymore at some point, which it, this makes sense from a build server point of view. But it's kind of annoying from a version control point of view. And last, I found OBS to be sometimes kind of slow. So I'm, I'll come to that, but for, for testing, I'm running the OBS development environment. And if, if my machine is under load, then OBS can sometimes take quite some time to process requests and then tests start to fail left, right and center because timeouts are hit, which is a little bit, a little bit annoying. And also you can unfortunately sometimes, yeah, run, run a simple denial of service against OBS by just starting a whole bunch of, a whole bunch of requests. And if you do that with a high enough volume, you can essentially kill the whole server, which is unfortunate. But so, well, and as I, as I noted, testing. Oh boy. Yeah, this, this one is, this one is, to be honest, really the besides, besides anything involving links on OBS, testing has been really, really challenging because we've been creating a user interface and testing user interfaces. That's really, really nasty. So you essentially are testing a GUI. And if you've, if you've ever tried doing something like that, it's, it's, it usually involves a lot of hacks and often doesn't work. So that's also why many, many big GUI applications don't have a lot of tests for the user interface, because it's just really hard to pull off. And so if you are into software development, and then there's essentially two big approaches to testing. One is unit testing that you use to test small components of your code, and then there's integration testing, which tests the whole thing. And so with, with unit testing, this is, this is in the context of a GUI, that's really not that easy to do. Because you have your, for certain parts, it's simple, but for the part that displays the UI, this is, this is a bit tricky, because you have your GUI toolkit, and that creates some, some kind of initial state and feeds your, feeds your program or your functions with data, and your functions produce some, some output, but then the GUI toolkit renders. And to successfully test that, you have to create this initial state yourself. You have to then usually also, also create so-called mocks of certain, of certain functions that, that call to the GUI toolkit. And then you have to verify that this actually, that the result that you produce actually creates the is correct, which means you have to, you have to yourself check, okay, do I create the correct visual output? Or is the result that I'm, that I make, is that going to result in the correct visuals? Which is, so, so that unfortunately tends to break if the GUI toolkit, in this case, VS code gets updated. So in this case, integration testing is, is a bit better to do. 
Since we essentially want to test that the extension does the right thing, testing the whole thing end to end avoids all these kinds of brittleness of checking the rendering yourself. But it's still relatively tricky, since this extension bridges Visual Studio Code and the Open Build Service, so you need an instance of the Open Build Service. You don't want to run this against production, maybe, since software has bugs and you don't want to accidentally delete important packages or do other nasty stuff, which can happen. For that, fortunately, the OBS team has a development environment, which is just a bunch of Docker containers, and we use that — which was relatively straightforward-ish. Another slightly tricky part is that the extension needs to handle secrets: you need to tell it your OBS password so that it can access the API, and this secret needs to be stored somewhere. On Linux we use libsecret, from the GNOME project, for that. And to not mess with your locally installed libsecret, we actually have a tiny C library that is injected into the test environment via LD_PRELOAD, just so you don't mess up your local secret storage. And last but not least, the actual tests are run using an extension that's called vscode-extension-tester. If you remember a few slides back, I said that VS Code is built on Electron, so this thing is actually just a website — there's even a web version of VS Code that runs in your browser, if that's a thing you'd like. And that means you can use all those testing frameworks that exist for websites, for instance Selenium WebDriver. Someone from Red Hat wrote a wrapper around Selenium WebDriver; this is vscode-extension-tester. It allows you to interact via an API with VS Code itself. So if you know openQA, which some of you in here might, it does essentially something comparable: you tell it, hey, find this element, click on it, tell me which new views pop up, and so on, and then you can check which editor windows are open and so on. So this one is really useful. I have a few tests set up with that, unfortunately not as many as I'd like yet. But in case you're interested in developing your own extension, this is a thing you should definitely take a look at. Well, and with that, I would go for the live demo. Let me share the other window. And in case there are any questions or anything specifically you'd like to see, please speak up. Is this roughly readable? Is that good from your view? Okay. Okay, so what you see here is essentially what you'd get if you open the extension right now. It displays the thing — the thing itself is called the Open Build Service Connector. So if you want to give it a try, you can just install it from the Visual Studio Code marketplace; just search for the Open Build Service Connector. There should be an updated version from a few hours ago. What I'm currently running in here is my development version, so it might look a tiny bit different — actually, it shouldn't. But I expect that, probably because I'm presenting this live, something will break, and so then I can at least attach a debugger and just be a little bit embarrassed. And that's it.
So if you if you open the extension, you can just just activate it by clicking on the on the nice on this custom OBS icon that Stasiak created for me kindly. And if you open the extension and you got a and you got a OSC configuration file already on your file system, and the extension will will prompt you for whether you want to import your accounts. And so I will just trigger that manually now using a command. And in in your case, it would also ask you for your for your passwords. If you've if you've never used it before, so it will store these in the operating system, operating systems key ring. So let the essentially the main interaction point is the are the project bookmarks. So the idea is that you that you'd add you'd add bookmarks for each for all the projects that you care about, and that you want to interact with. So in this case, you can see on the bookmarks it added two of these two of these server icons here. So one is for the open build service and one is for our internal one. And so I'm just going to quickly remove the internal one since we don't we don't need to at the moment. And what you just start out with is to create a to just bookmark a new project. And if you click this, this button, you can you get an overview of every single project on on OBS. So there's there's quite many. So let's just let's just pick one. Let's pick the utilities project. And then you can decide which packages of this you want to you want to bookmark you can just take all of them. But I'm just going to pick one. And we'll show up on these bookmarks. And you can take a look at the files in here. Can take a look at the spec file. Opening the tar file doesn't really make sense in this case. And I think the usual studio code barfs on that. So this this here is a read only view. So if I try to hammer my keyboard, this is a read only view because this is the file is just pulled down from from OBS and just displayed like that. Now, you might have noticed that if I select the if I select this, one of the files from the JTC package, that suddenly these two views here are populated. So what these so what this one, this one shows you your current project that that belongs to the open to the open file. So if I would simply add another one, let me just bookmark a few other packages. I don't want to really I want. So, okay. So if we open one of the patches in here, you'll see that it changes that it changes the current project changes to whatever belongs to the currently opened to the currently opened file. And again, this is just this this patch is from is pulled directly from the open build service. Good. So and then you also have this the view of the repositories. Now, this is this is a little bit of the simplified view that you have of your repository. So you have here every every repo that is configured for this for this package. So in this case, we got for leap and for the sleaze. And you can take a look for which which paths are defined here. And for which which architectures and as you can see, we can also also modify these. But I don't want to mess with the utilities repo. So I'm actually going to to branch the utilities package and hope that it will finish in time. So unfortunately, connection has been today rather slow. So it might take a while. Yep, the guards of the presentation are not favorable today. Yep, please if you want to say something, just go ahead. If you want to say something, then please say it and someone just unmuted themselves. I will say something or mute yourself again, please. 
Whoever just joined, please mute themselves. Hey, could you could you please mute yourself? You didn't fill in your name, but oh, good Lord, people. Do you know if I can somehow force mute someone? I'm afraid I'm not the admin of this place. So last week, not because I don't think that's actually a feature in jitsi. Oh, well, well, I mean, it's, I'm just going to locally mute the person who's currently making a whole bunch of noise. So if they say something, I won't be able to hear them. Okay, anyway, so fortunately branching finished. And it asked me also to check it out locally. So this is now, if you go if you go to build.opensuzer.org and check out this project, there should be now a new one and you can in theory live see how I messed up. What do you mean you can't see the presentation? Do you this? I should be sharing a VS Code window right now. Okay, so probably, yeah, it could be a jitsi connection. So jitsi relies on your jitsi has sometimes issues with if you're in an unfavorable geographical location, unfortunately. Like me, I can barely see everything because it looks like blurry cam. Yay, I know that. So, okay, so I've got this, I've got the package branched, it's in my home project. So that means I can absolutely mess it up as much as I want. And I'm going to do that to show you what you can do with repositories. So essentially, we got repos for everything and we can just add new ones in case. So we want to add, I don't know, mega seven. So and let's also add IBM power KVM 3.1, whatever that is. And as I said, cool. Immediately, I break stuff. So let's just add this one. I think that should, sorry, not adding any repositories today. I'll take a look at this later. So at least this part should, yeah, demo effect. And apparently, I don't have tests for this. I, what I do have tests for is, at least I think so, is if you can simply add architectures. So if you want to have a, if you just want to add stuff in here, then, okay, I'm sorry, there's something seriously broken here. Give me a second. What is going on? Why? Is the API side on OBS down? I hope not. I mean, the web UI still works. Well, it is Thursday. And this is around the time that they go and deploy. I get an API reply from, from OBS itself. Let me just restart it. Screen is still shared. Okay, now it, now it works. Well, have you tried turning it off and on again? Anyway, so you can, you can simply click on the plus button there and just add whatever architectures you like. So this, this all, so it will, it will always do that for the, for the project that belongs to the currently opened file. And you can also, in the same way, just delete stuff. As you can see the, so the delay that you see here, that's, that simply it updates after OBS has also updated that. So, and also in a similar fashion, you can add new paths to your repository. So this will just open this search for, for a new, for a new project. So I can just start typing in and select some, let's just add some leap 15.2 use standard. And it will some point pop up here. As you can see now, there's now there's also these arrows that showed up here. So you can, you can move the paths up and down since this actually makes a difference depending on so the, the order in which the, the paths appear in your repository that actually makes a difference. So that's why you can, you can change them via these buttons in here. And we should also be able to, let's also now try just to add a new repository from some other distributions. Yeah. 
And now that, that part works as well. So as you can see, I've added open source of factories at systems. And this is, this, this allows you to add new repositories from, from the predefined ones that exist on the, that are defined by the open build service. And if you don't, if, if you don't like it, you can just, just remove it and it will eventually disappear. So also other ones can just start fragging them and then they're gone. So, okay, so this is, this is all nice. This is all, all server side. And what, what we now can do as well is, is to check out, to check out this package. And there's a button and I, okay, so this is what you currently can't see is, unfortunately, because I'm just sharing the VS code window. So, okay, there's going to be a downtime in a few minutes. I just seen the chat and yeah, we're then also close to the one hour mark. So what you, unfortunately, currently can't see is there's a file picker popped up. That allows me to specify a directory where I, where I want to check it out into. So I've just selected one. And it checks it out and asks me if I want to open it. So I'm just going to click yes in this case, we'll open it. And now what I have here is this all still looks the same. But this is actually a local file. So if I do this, it's, it's actually, I can actually modify this. And this is, so as you can, so this is actually a file on my on slash TMP. So let's do something in here. There's something really, really simple. And as you can see, this, so we've, we've integrated this, we've integrated the build service into the version control. So you can, you can now use this fringe indicator, you can see your diff in here. Save this can do, do the stuff like reverting the changes. Works also with additions. And this also shows up in here in the source control. So if you're familiar with Visual Studio Code, you'll see the source control in here that's, so all your, all your files will show up in the source control. One thing that's a bit unfortunate is with the, with the source, so with the source control, I think the idea behind Visual Studio Code is that you only see really the modified files in this view, because now it says there's three pending changes. But we've kind of opted to also display all your existing files, since most packages actually don't have that many files. And what I've at least in, for my use case, I frequently want to delete files. And here with, in this view, I can simply click the remove this file button, for instance, yeah, boop. And now it's also handled by the source control. So I can also simply, simply revert it and it's, it comes back. So this should be, this should be more or less integrated with the, with the, with the VS code source control as much as roughly as you like it. And yeah, so what you can, what you can also do is you can actually build your package. So in case you want to, so Visual Studio Code has these so called tasks. So if you open the commands and say run task, you'll see it will give you a few tasks that get contributed. And in this case, you want to look for the OSC task. And you'll get a selection. So essentially, you can build, you can run OSC built for every repo and every architecture combination that's out there. So I'd say open sucer factory x86 say, say just without scanning for, for the output. And then it will run OSC built inside here. Ask me for my root password. And now it will build it. It should take a bit. 
And so in the meantime, we can simply do a so I've now made a really super pointless change, but just to be able to do, to do something. So you can now see we got the, the spec file showed up as changed. And if I go into the source control and click here on, on the file, it will open a diff view. And you can see now my, my build actually finished in the meantime. So this should still build. Now I'll say, okay, cool. So let's commit the changes. And what I can do is, so there's two options in here. One is add an entry to the change log. So what this does is it writes the dot changes file, which is you, which is customary to do on the, on the open sucer side of things. And as you can see, the changes file now shows up as a, so there's a new entry from just now. And you can see there's now those two changed files. And now we can just commit this boom. And there's also a rather hidden view in here that shows you the comments of this files of this, of this package. So few, if you click on them, it shows you some, it's not super useful yet, but it shows you essentially the revisions that were made, who made them when they were made the MD5 sum and the comment message. So this is, this is roughly what you'd have if you run OSC log. So if I open the terminal here, I can, so this is also a, this is a valid OSC package. So OSC should still, should still be able to work with this. At least I try to test, test for that as well. And if we take a look at OSC log, and you can see the view is essentially comparable to this one. Good. And I think the last thing that I'd like to show you is now we got, now we got the, our, our tiny change. And if we go back to our bookmarked view in here, I can just update it. And it should eventually show up in here as well. I was hoping it would, but it looks like, I'm sorry, it looks like it's, it's currently taking a while to sync the, to sync the changes back down again. So occasionally, occasionally it takes a while. But what we can also do is to submit this package back again. So if you click, if you do a right click on this, oh, I see that's also not visible. Screen sharing. So if you do a right click on a package, there's two options. One is branch and the other one is submit. And if I go to submit, if I just disable my webcam, you'll see it created a new request. So, and this is a clickable link. So if I click that, it will, it will, I can just open that in my, in my web browser. And in this case, I then reject this, reject this request because it's well garbage, but that's besides, so that's just for demonstration purposes. And unless I have forgotten something, that should be roughly what the extension can currently do. So I hope this is this gave you a rough overview. Since I'm already nearly 10 minutes over time, I don't think I should be showing you how to develop stuff, but I'm very much open to any, any questions, any suggestions, ideas. So if you want to give this, give this thing a try, go to the Visual Studio Code marketplace, search for the open build service connector. Let me just, so, okay, so someone is asking whether you can get this through openvsex.org. I don't know, I have never heard of that. So I don't know. I'll have to, I'll take a look at that if it's, if it's possible to get it from there. Just, yeah, doesn't look to be available there. If it's, if it's easily possible to submit it, I'll, I'll submit it there as well. So, okay, so as I said, you can search for this on the open build service connector. There's also links to GitHub. 
So you can find the actual code itself is under the SUSE organization, open minus build minus service minus connector. And there's also, so that's for the actual extension part. And for the front, for the backend library, so it's kind of split in split two ways. We have the, we have the backend library and which is communicates with the open build service API. So in case you want to have a, have an API wrapper for the open build service API that's written in TypeScript, that's called open build service API, all separated with, with dashes or minuses. So since there's no one appears to be wanting to ask anything, so. You did good. Thanks. I guess, well, I do, I do kind of have a question. Fool, it's not kind of, it is a question. When you were working on it, what was the most frustrating aspect and what was the most interesting aspect? Well, the most frustrating aspect was, to be frankly honest, was sometimes OBS is just dense. So that was, that was relatively frustrating and also getting the, and also getting the tests to actually work was, so it was for instance, extremely frustrating to get some, to get simple stuff like code coverage out of, out of the unit tests, the, that, that has been also extremely frustrating to pull off and the, well, and how did you formulate it? The other thing rewarding or something like that? Rewarding works, or interesting or the most fun, like, what was the most positive part of the experience? So I'd say is every time, every time something new, I managed to implement something new. That was, so for instance, getting the, getting the version control in there, that was, that was really rewarding since that part isn't too well documented in the API. And getting that work was, was pretty rewarding since it was eventually relatively simple, but getting all the bits and pieces together took a bunch of testing and figuring stuff out. Yes, Gustavo, you want to say something? Please go ahead. And you're muted just in case you're already saying something. No, no, sorry, I just clicked the wrong thing. Just I thank you. It's a great presentation. We'll enjoy that. Glad you liked it. Yeah, I have to say this is probably the most interesting OBS integration I've ever seen. I'm honestly a little surprised that it worked because the OBS version control doesn't exactly map cleanly to what everything expects a version control to do. Yeah, I have to agree with that. But on the other hand, the version control that that VS code expects is it's pretty flexible. So VS code hasn't doesn't really have so the the version control integration that VS code offers, that's actually it's pretty flexible. So you can do a lot of stuff with that. For instance, there's an example extension from Microsoft themselves that integrates the that integrates the built-in version control with JS Fiddle, which also has some rudimentary version control built in. So that that part actually works relatively well. And since since VS code has no real history view, as you'd expected, there's also not really something that OBS weirdnesses in terms of history handling could really break that terribly. So that part has been actually relatively trouble free so far. That's pretty awesome. So with all that you've done so far, what would be the next thing you'd want to tackle for this? So we are the current plan is so there's there's still a bunch of stuff, stuff that's not really working. 
So for instance, I think it's what's not really very well working is if you if you have a locally checked out project, updating that via the extension is not that's not possible. So that would be something that I'd that I'd like to add showing built results would be would be nice. But we are also currently looking into looking into creating something that will to create an extension that would be essentially this thing, but for our container application delivery platforms. So I'd say then the next steps for this are mostly also waiting for user input. So I want to present this in a also in a few places and hopefully get some get some people get more people than currently to use it and to say what works what doesn't and what they'd like to see. Since so far it's been so far it's been mostly modeled after my workflow. And that's not really representative. So in case you want to use something like that, but would like it to be able to do something open an issue on GitHub. Please. Well, I mean, thank you for doing this. This is great. Congrats. And keep up the great work. Thanks. Glad you liked it. So unless there's any further questions, as I said, if you'd like to give this a try, you can find it in the VS Code Store. There should be links to the to the GitHub pages. So if there's something you'd like to see, just open an issue and or if you something breaks, you can find that through through GitHub. So thanks you all for for attending. Thanks for sticking sticking here for so long. And hope you give it a try if you're if this is something you could find useful. And with that, thanks and I'll sign off.
Dan joined SUSE to work on development tools as part of the developer engagement program, after working on embedded devices. He is an active open source contributor being involved in various upstream projects and a package maintainer in downstream Linux distributions, like openSUSE and Fedora. Beside testing and cryptography his passions include automating everything, documentation and software design. The developer engagement program at SUSE has launched the Open Build Service Connector, an extension for Visual Studio Code that integrates it with the Open Build Service. Its purpose is to ease the interaction of developers with OBS to package their software without having to leave their editor. The implementation has not been without surprises, both from Visual Studio Code and the Open Build Service. We illustrate some of the expected and unexpected features of both that we faced on the way and the challenges resulting from that. The second part of the workgroup will be an interactive live demo of the current state of the extension and how you can use it to interact with your favorite instance of the Open Build Service.
10.5446/54698 (DOI)
So, welcome to the Paris-Peking-Tokyo seminar. It's my great pleasure to introduce the speaker, the first speaker on this Zoom session today. The speaker is Arthur-César Le Bras, and he will tell us about prismatic Dieudonné theory. So please start. Okay, well, hello everyone, and thank you very much to the organizers for giving me the opportunity to speak and for setting up everything in this very particular context. So my talk today is about prismatic Dieudonné theory, and everything I will discuss is joint work with Johannes Anschütz. For all of this talk, I will fix the prime number p once and for all. The goal of our work was to prove some classification results for p-divisible groups over various kinds of rings, and the main tool we used to do this is the recent theory of prisms and prismatic cohomology, which has been developed by Bhatt and Scholze. So the plan of my talk will be the following. I will first spend some time, maybe twenty minutes to half an hour, to discuss a little bit prisms and prismatic cohomology — recall the basic definitions and constructions. Then I will start speaking about the joint work with Johannes. First of all, I need to tell you over which kind of rings we want to classify p-divisible groups; these rings are called quasi-syntomic rings, so this is what I will do next, I will explain the definition of these rings. Then I need to tell you by which kind of objects we classify p-divisible groups over such rings — we call them filtered prismatic Dieudonné crystals — so I will explain the definition and some basic properties of these objects. And then finally I will explain our main results, say a few words about the proofs, and give some corollaries of these results. Okay. So let me start with prisms and prismatic cohomology, and I should make it clear: everything in this part is due to Bhatt and Scholze. The theory of prismatic cohomology relies on two important basic definitions, the notion of a delta ring and the notion of a prism. So let me start with delta rings. I will assume always that my rings live over Z localized at p. A delta ring is then a commutative ring A together with a map of sets, just of sets, delta, which goes from A to A and which has the following properties. First of all, it maps 0 and 1 to 0. Then you want to prescribe how delta behaves with respect to multiplication and addition, and this is given by these two formulas, which may look a bit strange the first time you see them. Delta of a product xy, for all x, y in A, is x to the p times delta of y, plus y to the p times delta of x, plus p times the product of delta of x by delta of y. And then you have a similar formula for the addition: delta of a sum x plus y is just delta of x plus delta of y, and then you need to correct it by this term here, x to the p plus y to the p minus (x plus y) to the p, all divided by p. Observe that if you use the binomial formula to expand this (x plus y) to the p, then all the terms except x to the p and y to the p, which are cancelled here, are actually divisible by p, so this expression makes sense in any ring; you don't need to assume that the ring is p-torsion-free or anything like this. Okay. So this is what a delta ring is: a ring A together with a map of sets satisfying these three properties. If one has a delta ring (A, delta), you can do the following operation: you can define a map phi, which goes from A to A and which sends x to x to the p plus p times delta of x. And then you check, as an exercise, that the identities satisfied by delta, which I had before over there, are exactly what you need to check that this map phi — a priori just a map of sets from A to A — is actually a ring morphism.
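Written out, the identities just described are the following; this simply restates the speaker's formulas in symbols.

```latex
% Axioms for a delta-structure on a ring A:
\delta(0)=\delta(1)=0, \qquad
\delta(xy)=x^{p}\,\delta(y)+y^{p}\,\delta(x)+p\,\delta(x)\,\delta(y),
\qquad
\delta(x+y)=\delta(x)+\delta(y)+\frac{x^{p}+y^{p}-(x+y)^{p}}{p}.
% The associated ring morphism lifting Frobenius:
\varphi(x)=x^{p}+p\,\delta(x).
```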
Moreover, just by definition, phi is a lift of Frobenius: if you kill p, then these terms disappear and it just becomes the Frobenius, x goes to x to the p. And conversely, assume you start with a commutative ring A together with a ring morphism phi which lifts Frobenius modulo p. Then, if you assume that the ring is p-torsion-free, you can actually just divide phi of x minus x to the p by p, and define in this way a delta structure on your ring. So in other words, in first approximation, you can think of delta rings as rings together with a ring morphism which lifts Frobenius; but the two notions are not exactly the same thing when you have p-torsion in your ring. Okay. Another point of view, which I just want to mention about delta structures, is the following. If you have a ring A, then giving yourself a delta structure on A is in fact the same thing as specifying a ring morphism from A to the ring of length-2 Witt vectors over A which is a section of the natural projection onto the first component. And the recipe for this is: assume you start with a delta structure delta on your ring A; then you just look at the map from A to W_2(A) which sends x to (x, delta of x). And once again, you can check that the axioms for delta, which tell you that you have a delta ring, are exactly what you need to verify that this is a ring morphism. Okay. And this point of view is useful because it allows you to prove that the category of delta rings, contrary to the category of rings with a lift of Frobenius, has all limits and colimits, which you can just compute on underlying rings. So in particular, the forgetful functor from delta rings to rings has both a left and a right adjoint, and the right adjoint is given by the Witt vectors functor. Okay. So that's the first remark. And then another remark, which is more an exercise that you can do: I said that there are examples of delta rings with p-torsion, but it can never happen that you have a delta ring in which p to the n is 0 for some n. Okay. This is something you can check just using the definition of the delta structure. Okay. Very good. So we will see some examples later. Oh, wait a second — it's a trivial remark that 0 is a delta ring, so p to the n... Yeah, okay, I assume that 0 is different from 1. But okay, yeah, if you want. No, but if you want all limits and colimits, then maybe. Okay, yeah — then I should have said that there is no non-zero delta ring in which p to the n can be 0. Thank you. Okay. So as I said, we'll see examples soon, but now I'll come to the next definition, namely the definition of a prism. So what is a prism? It's just a pair (A, I), where A is a delta ring — usually in the notation I will just forget the delta; I'll just say A is a delta ring without mentioning delta, it's implicit. So you have a delta ring A, and then you have some ideal I inside A, and again this pair has to satisfy some properties. First of all, you require that I defines a Cartier divisor on Spec A — so it's just, locally, principal and generated by a non-zero divisor. Then you also require that A is (p, I)-adically complete; for technical reasons, you should mean this in the derived sense. Also, soon we will make some assumption on the ring which ensures that in practice the derived and the classical completions agree.
So first approximation, you can just think that this is classically p iidically complete. And then the last condition, the most important one. So I has to be a pro-zariski locally generated by a distinguished element. And what is a distinguished element by definition is just an element of D, which has the properties of delta of D is a unit. Okay. And once again, in practice, I will always be principal. So you should just remember that a prism is a pair a, i, where a is data ring and let's say i is principal generated by a non-zero divisor, which is distinguished in the sense that it's made by delta is a unit. And moreover, the ring should be p iidically complete. Okay. The important condition is the last one. And so now I can give two examples of such prisms. So first of all, okay, if you have a p complete on p torsion free delta ring, a, then the pair formed by a and the ideal generated by p is a prism. Okay, so I mean, remember, so you need to check three conditions. So first of all, I had assumed that p, there is no p torsion, so please non-zero divisor. And by assumption, I also require that my ring is p complete. In this case, I is just p. And finally, the only thing you have to check is actually that delta of p is always a unit. Okay, and this, I mean, this is also what you need to do this little exercise I gave before that it cannot happen that p to the n is zero. I mean, the way to prove this is just to check that delta p has to be unity in any data. Okay, so that's one first class of example. And now here is another interesting example of prisms. So, let me give one more definition first. So we will say that the prism is perfect. If it's for Benus phi. So remember, whenever you have a prism, sorry, a delta ring, you have this delta, then you can define a phi which leaves for Benus. There is a formula that phi to the x is x to the p plus p times delta x. So what you require is that this ring morphism phi is actually an isomorphism. And then the claim is that actually the category of perfect prisms is the same as the category of integral perfect with rings. And how do you see that? Well, I mean, you can define a function in both directions which are quasi inverse of each other. You how so in one direction assume you start with an integral perfect to a dream. Then you can do this classical font and construction. You can consider a inf of r. So what you do is first you tilt your perfect ring. So you get perfect ring of characteristic p r flat. And then you take its string of bit vectors. This is what is called a inf of r. And a inf of r comes with a natural map sitar, which goes from from it towards our. I mean, part of the definition of at least consequence of the definition of an integral perfect to a dream is that this will be principle generated by a non zero divisor. And you check that in fact, this generator which is usually denoted by Xi in p. So the first theory is in fact distinguished in the previous sense. So, namely, so I should have said first that this ring is p torsion free. Our flat being perfect. And so it has, it comes with a natural for being used. And you just take the delta structure attached to this to this for being used to. And then my claim was that the generator of this idea is actually distinguished. Well, the way I mean you can check this by proving more generally that in such a delta ring and that delta ring of this form bit vectors of some perfect ring. 
An element is distinguished it on only if it is primitive of degree of degree one, which means that when you write the expansions of some of the time times powers of p, then the coefficient of p has to be a unit. And this you can check for. Okay, so this is one one factor. And then if you want to go in the other direction. It's even more simple you just mod out. You have a prison AI, which is assumed to be perfect. And then you just look at a mod. How do you recover the data. How do we get delta. Yeah. Well, okay, I said that if the ring is p torsion free, which is a case for this ring a info far, then having a data structure is the same as having a problem you see. And you have a problem you send the ring a bit like. So you just take the data structure attached to this to this for being. Okay, and so this, this proposition is the I mean this example is the reason why I guess button shows up. Like, describe prisms as some kind of deep perfection of the category of perfect to do it because you see that inside the category of all prisms I have this. And subcategory from my perfect prisms and this are exactly the same thing as perfect. And I is like choosing I my ideal is be like is the same as choosing some until my of the tilt of my perfectly. Okay. Okay, so once you have done this, namely introduce data rings and prisms, you can define the prismatic side. And there are several versions of it. So, for us, the one which is really relevant is the absolute version. So, let's start with our question. Hmm. Could you unmute the person. Yeah, I did not hear anything. Okay, she could you unmute the way the person who asked the question. Yeah, so I'm trying to do that but it doesn't work. Maybe. Yeah, there's a question on the microphone. There's a problem on the microphone. Okay, so maybe he can ask his question by chat. Okay. Okay, so I define those are the absolute prismatic site. So let me fix a ring R, which is assumed to be purely completely. And then the absolute prismatic site of the ring R, which will be denoted by our prison. Okay, so this symbol is supposed to be present, even if it appears as a delta here. So as a category, it's just the opposite of the category of all bounded prisms BJ together with a ring map from R to be more J. Okay, so here there is one adjective which I did not define yet, bounded. So it's just again, a technical condition, but which you can forget in first approximation. It's telling you that you have your prison BJ. And what you require is that be more J here as a bounded P infinity torsion. And this just means that if there exists some integer n big enough, so that any element which is killed by your power of P in this ring P mod J is actually already killed by P to the end. And this is a condition you put for technical reasons, which have to do with with derived versus classical completions. Okay, but I mean basically an object of the site is a prism with a map from R to its reduction be module J. And then you put a topology on this category so you define covers to be morphism of prisms. PJ goes to be prime shape prime. So morphism of prism is an obvious notion. So it's just a morphism of data rings compatible with a data structure, which sends J into J prime. And you say it would be a cover if when you just look at the underlying ring map from B to B prime. And it's just a P J completely facefully flat. So this means that if I take the derived answer product of B prime with B module P J over B. I mean, first of all, it has to be concentrated in the zero. 
And it is then facefully flat over B module P J in the usual sense. So slightly weaker notion that the notion of facefully flat ring morphism. Someone would just require a condition module of PJ. For the reason that again everything is assumed to be complete. And you want to notion which is table under completion. So instead of looking at facefully flat morphism you just look at PJ completely facefully flat. So there's a question. Okay, please. Okay, so first for the prison several technical so in the morphism of prison you require the J goes to J prime but I suppose it's for it should follow the J generate J prime is it correct. Yeah, this is true. Yeah. And also you need to have a finite disjoint unions of I mean when you have the topology also wants a risky covering of this and like an opens. Here you just have covering by one thing of course you need to add like the disjoint union of and load or the couple. Well, if he's a product of situations then it's taking all of them will be a covering by and things. Okay, so yeah I agree. Okay, so it should just be generated by this. This covers. Okay. Okay, no, no other question. Okay. So this is the definition of the prismatic side. And then you have two natural pre shifts on this site. So one is denoted is denoted by oprysum. And the other one is denoted by oprysum bar. This is a functor which define on the prismatic side which send a prism BJ on the prismatic side to be. So this is for for oprysum and oprysum bar will send BJ to be more J. And something bad should check is that these two functors are actually shifts on this prismatic side. And they both have a name so this chief or prism will be just called the prismatic structure shift. And the other one oprysum bar is called the reduced prismatic structure shift. And then I mean, okay, something you can deduce from this is that you can also you could also consider the functor I prism which sends just PJ to J itself. And then you this is also a shift on this side. Okay, so you have these two shifts oprym these three shifts on this prismatic side. And this is the ones we will use later on. But before doing that I want to mention that okay, I define the absolute version of the prismatic side, you could also do the following so let me fix to start with a bounded prism a comma I. And then I require that my ring R is leaving over AI so R is a P complete a mod i algebra. And then you can define the variant of the absolute prismatic cycle where everything leaves over a. So it will be denoted by R over a prism. And it will be the category of all prism BJ, which live over AI so together with a map of prisms from AI to BJ. As before, you want that be more J receives a ring morphism from R, which now is required to be a morphism of a mod i algebra. So everything leaves over my bounded prism AI which I fixed at the beginning and the topology is as before. And this is the version that batches will use and what they do I mean one of the main objectives of their paper is to compare what you get using this notion with other classical more classical p. Addic homology series. And so to do that, you, you, I mean, once you have defined the site you can define prismatic homology. So, I keep the same notation as before so AI is my fixed prism and R is living over a mod i. And then the prismatic homology of R over a, which will be denoted by prism of R over a is simply the homology of my shift or prism on this relative prismatic site. R over a prism. 
And well, actually, to be more precise you only make this definition when R is assumed to be a formally smooth over a mod i. You could do it always, but it's not well behaved. And the way batch also define prismatic homology for general a mod i algebra is using left hand extension from the smooth case. In the same way as you, you would define the cotangent complex from the shift of degree one differentials in general. But at least when, when everything is smooth, then the definition is just the community of the shift on this side. And as I said, what batch also do is they compare this new community theory with other periodic homology series. And I just want to mention two comparison results I prove there are many of them. Namely the hot state and the crystalline comparison series. So for this, because I recall notation AI is fixed prism, which is assumed to be bounded. And R is formally smooth over a mod i. Okay, so the first result is the hot state comparison. So it tells you that. Let me start with the right hand side. So here you have my prismatic homology complex, which I defined before prism arm of a. And you take its direct answer product with a mod i over a. So it would be the same as considering the community on the prismatic side of my reduced prismatic super shift or prism bar. And I consider a community of this in some degree. And the claim is that this is kind of canonical isomorphic as an arm module with the module of degree I differential forms on R over a mod. And here I mean it's implicitly assumed to be purely completely up to some small twist, which is denoted by the symbol is an I. So boy kissing kind of twist. So I recall the notation below. If you have some a mod i module M, you will be not by M twisted by I the tons of product of M with I mod I square I times over a mod. So this is a rather surprising result when you sing a little bit about it because I mean you have made this definition of prismatic community just using delta rings and prisms. And you see that naturally when you compute it, I mean when you compute the reduced prismatic homology the community groups of this complex. So you see differential forms showing up. Okay, and as a remark, I said before that the other question. Yes. So, you didn't define what is be completely smooth I imagine that completions of smooth things or direct limits, fitted limits of those are completely smooth but what is the exact definition. So I would say that I have a map a to be, I would say it's be completely smooth if I take the derived answer product of be with a mod P over a and I require this to be sitting in the zero and being smooth over a mod P in the classical finite type sense. Yes. Okay. Other question. Okay. Great. Okay, that's a remark that I said before, if you want to define prismatic homology in general, you don't do it just by computing homology of the structure shift on the prismatic side. You do this process of life can extension. But once you have done this, I don't want to explain it in detail, but you can check that this hot state comparison result will actually generalize as follows. Namely, if you have some a mod i algebra are P complete but not necessarily smooth. Then it's reduced prismatic homology so this complex over there. The base change to a mod i of prismatic homology. It actually comes equipped with a natural filtration which is increasing. 
And as the properties of the great pieces of this filtration, which is called the conjugate filtration are given just by which powers of the cotangent complex of our over a mod i and suitably shifted on break is interested and periodically completed. So, I mean, this is something you can directly deduce from the definition from the previous hot state comparison, plus the definition of both sides in general. I wanted to point this out because one, I mean, we will see the cotangent complex appearing later one later again. Basically the moral what you can remember from the statement is that hot state comparison gives you a way to like have some control on prismatic homology or at least its reduction module i in terms of the cotangent complex. So if you have some information on the cotangent complex you can usually deduce some interesting properties of prismatic homology. Okay, and then the next statement is the so called crystalline comparison. So this is the case where you assume that in your fixed prism AI is generated by P. Okay, then in particular because our leaves over a mod i it means that P zero in your ring. Then you could ask the question how does this prismatic homology relate to another interesting crystal, come on, this year in in characteristic P, namely crystalline commodity. And the answer is that, actually, they are almost the same. So if you compute crystalline commodity of our A. Then, well, it is the same as prismatic homology except that here on the right hand side, you have to twist by four minutes so you take pullback along the four minutes of a. Okay, so in particular if you know what prismatic homology looks like you recover crystalline commodity. You can not necessarily go the other way because a is not assumed to be perfect so five is not necessarily an isomorphism. But at least if you know prismatic only you recover crystalline commodity. And this is compatible with the strict with the Frobeno structure on both sides. This is also quite surprising because this way you get a definition of crystalline commodity without choosing divided powers and anything like that. And the key technical statement to check this is funny exercise that if you have a repeat ocean free data ring. If you have some element in this delta ring, so that it's first divided power is in your ring, then actually all the other divided powers are also in the ring. This is something you can check using the existence of the delta structure. This is one of the key inputs to in the in the proof of this crystalline comparison. Okay. Oh, there's a question. Just a clarification in the theorem you wrote the upper stuff, the kind of pullback. But since you're always working the derived the complete things is it's either some completion there or is that the bright. Maybe I'm confused but wait. I'm not sure now. I don't think you need to complete. I think what you say is that the delta. Yeah. No, go ahead. No, I as far as I understood you characterize the your you characterize your your things like guarz vitz forms is only after modding by I and also maybe you to be complete so it seems that everything is something derived complete relative to your finals and so no, and when you take fear, drying anche start by destroying this, I'm not Yeah. Okay. Okay. Okay. Good, so, okay, that's all I wanted to say about prismatic homology in general. Now I turn to prismatic doodonissiri itself. 
So as I said, I need to explain over which rings we want to classify feasible groups and by which kind of objects we want to classify. So I start with the rings. So there will be again definitions. So the rings we will consider are called quad asymptomic. So ring R is said to be quad asymptomic if it satisfies the following conditions. So first of all, it's P complete. So this we always assume everywhere and with bounded P infinity torsion. So I recall that this just means that there exists some integer n so that everything killed by power of P is already killed by P to z. Okay, and then the really important condition in the definition is that you want the cotangent complex of R over ZP to have P complete tor amplitude in degree minus one zero. So this means that you take this cotangent complex and you take its derived answer product with n for any R mod P module n. And then you want that these objects leaves in degrees minus one zero and the complex of R mod P modules. Okay, so this is the absolute notion somehow and then you can also define what the quad asymptomic morphism is. So it will be a morphism of P complete with bounded P infinity torsion rings. R goes to R prime which, okay, first you want that R prime is P completely flat over R. So I already explained what that means. And then you want that the relative cotangent complex of R prime over R as P complete tor amplitude in minus one zero. Okay, and you can also define what the quad asymptomic cover is. This would be useful later. It's the same definition, but instead of requiring that the map is P completely flat, you want it to be P completely faithfully flat. Okay, but so, yeah, the important condition is really the condition on the cotangent complex. And this definition is due to, I mean, it appeared in the paper of Batmouro and Schultzern topological orgy-domology. And the idea is that it should extend in the world of periodically complete rings. So there's some trouble. We lost that speaker. Ah, yes, coming. Hello. Ah. Okay, does it work? Sorry, I think we... Yeah, no, we convert our connection. Well... Okay. Is it okay? Yes, sorry. So is it good now? Yeah, it's good now. Okay, sorry, I think the connection, okay. That was, I don't know. Okay, so I was just saying that this definition is an extension of the classical notion of symptomic, well, of LCI ring and symptomic morphism. But you don't make any Nussarian or finite type assumption in this definition. So, yeah, before giving examples, one more piece of notation. So I will denote the category of all quasi-syntomic rings by QC. And then you can look at the opposite category and consider the topology which is defined using the quasi-syntomic covers in the above sense. So as I said, like maps which are quasi-syntomic and which are P completely phase-free. No, sorry, P completely phase-free. Okay, and then as a notation, if R is an object of this site, I would just denote by R with small letters QC, the sub-site which is formed by all rings which are quasi-syntomic of R. And again, undone with this quasi-syntomic topology. Okay, so I give examples of such rings now. The first example is just to justify the claim before that this generalizes the classical notion of LCI ring. So the claim is that any P complete and SIN ring which is locally complete intersection is quasi-syntomic. And well, this is checked using, I mean, actually Avramov gave a characterization of such rings LCI in terms of the cotangent complex. 
But here you just need the easy direction of Avramov C of M to prove that any such ring is quasi-syntomic. Okay, so that's one first class of example, but you also have like huge rings in this category of quasi-syntomic rings. So, namely I claim that any integral perfected ring is quasi-syntomic. And the reason for this is, okay, well, remember you have to check, okay, first of all, it's purely completely by definition. Then you can also check that for perfected ring, if there could be some P torsion, but like the P torsion is just the same as a P torsion. So the first two conditions are checked. And then you need to check this condition in the definition about the cotangent complex. And for this, well, you observe that this map from ZP, I mean, the canonical map from ZP to R, it actually factors through, you can factor it through the CTA map. So I should have written that this second map here on the right is Fontaine Zeta map. And whenever you have such a composite, you get a triangle for the cotangent complex. And now you observe that, well, what is a inf of R once you mod R of P? It's just R flat, which is a perfect ring of characteristic P. And whenever you have a perfect ring of characteristic P, its cotangent complex over FP is just zero. Basically here is just that. If you take any X, like it's always of the form Y to the P for some Y, because the ring is perfect. And then DX is just like P, Y, P minus one, DY. And if P is zero, this is just zero. So this way you can check that the cotangent complex of something perfect over FP is zero. But this just tells you that mod P, I mean the P completions of my cotangent complex of R over ZP and the cotangent complex of R over a inf of R agree. Right, because in this triangle, the other term will vanish after P completion. But so then once you know this, you can just, you are reduced to describe this cotangent complex of R over a inf of R. But then this map is a subjective and the kernel, theta is by properties of perfected rings, is principal and generated by a non-zero divisor. So this means that then the cotangent complex is just the same as R, but shifted leaving in common, common logical degree minus one. Okay, so this way you check. That in fact, in this case, the cotangent complex even has term amplitude in degrees minus one, minus one. It's even better. And then from these two class of examples, you can construct other examples if you want. So you can take a smooth algebra of our perfected ring and take its P completion. Or you can take a perfected, sorry, yeah, take an integral perfected ring and just mod out by a finite regular sequence. So if it's again, if it is again, bounded P infinity torsion, then this would give you another example of quasi-syntomical rings. And here I list some examples. So you can take the theta algebra in one variable over OCP. You can take OCP mod P or you can take this characteristic P perfect ring, FP T one over P infinity and mod out T minus one. So these are all examples of quasi-syntomic rings. Okay, so I think that's basically all I wanted to say about quasi-syntomic rings, but I just wanted to try to convince you that many interesting examples of rings are actually quasi-syntomic. Okay, and one good point of this about Mohr-Scholz's definition is that it's purely a definition in terms of the cotangent complex. 
And we have seen before, this was a hot state comparison theorem that if you have some control on the cotangent complex, because of this hot state comparison, you can usually deduce things about prismatic correlation. Okay, now I turn to third part. So filtered prismatic do-donate crystals. This will be the objects we will use to describe our P-zibber groups. And as you can guess, the definition will use the prismatic side. So to state it, I first need one observation. Let's take R to be quasi-syntomic. And then the claim is that, okay, you have a natural morphism of topos, which goes from the category of sheeps on the prismatic side, the absolute one, to the category of sheeps on the small quasi-syntomic side of my ring R. And if you wonder how this, what does this morphism of topos come from? Well, it's defined as a composition. So first of all, you observe that if you take a prism AI, so I mean, if you take some object of this prismatic side, you have a prism AI, then you can look at a mod i. So it will be P-complete. And the claim is that, this defines a, a co-continuous function from the prismatic side of R towards the big quasi-syntomic side of our ring R. And then you just restrict. Yeah, but I'm, only difficulty is checking this co-continuity of this, of this function. But if you are familiar with crystalline common logic, it's like, very similar to what you do when you, you go from the crystalline side to the etal or the risky side. Except that here we work with something a bit more general. We work with the quasi-syntomic topology. Okay, and now what I will do is I will take my prismatic structure shift or prism on this ideal prismatic shift. I prism, I just push everything using this morphism V. So V was my notation for this morphism of topos. I just push everything down to the co-syntomic side. And then I claim that, okay, actually you have a natural surjection from oprys to O. And I will give a name to the kernel of this morphism of shifts. So O here is just the structure shift on the quasi-syntomic side. This kernel is denoted like this. So it's what is called the first piece of the Niagara filtration on this prismatic shift oprys. Well, the reason for this notation is that, I mean, there is this notion of Niagara filtration of a prism, which is defined for any, you could define for any positive integer I. Here I only need the first piece of the Niagara filtration. I would just take as a definition that it is the kernel of this surjection, which I did not explain. And then a property of this is that, well, because any data in other Frobenius, so this shift oprys will come with a Frobenius morphism, Phi. One can check that this kernel has the properties at Phi of the kernel. So Phi of the first piece of the Niagara filtration for oprys actually lies in high-price times oprys. So morally on this first piece of the Niagara filtration, the Frobenius is divisible by, well, let's say the locally I, the ideal is generated by non-zero distinguished element. The idea is that on this first piece of the Niagara filtration, Phi is divisible by this distinguished element. So there is question from over here. Yeah, there's a question. I cannot hear. Can you put the microphone on? You should activate his. I guess you can unmute him. Oh, yeah, now his microphone must be on. Internet, I don't. Oh yeah, now you can speak. So why do you write oprys times oprys? I think oprys is an ideal in oprys. Oh, yeah, sorry, sorry. Yeah, it's a typo. Yes, thank you, sorry. 
Yeah, it's just because later it will appear. Yeah, sorry. Okay. Okay. Okay, so then the definition of a filtered prismatic dunic crystal is, so let me again fix a quadsentamic ring R. And then a filtered prismatic dunic crystal is by definition of collection, a triple M, fill M and five M. So M is finite located free oprys module. First of all, then fill M inside it will be some oprys sub module. And finally, five of M is morphism, ring morphism from M to M, which is assumed to be a file in R. Okay, where five is a Frobenius of oprys. Okay, and then you ask three conditions on this triple. So first of all, you want that this Frobenius five M sends a fill M to I press times M. Okay, but then there is an obvious oprys sub module of M which also have this property that five M of it is leaves inside I press times M. Namely, consider the first piece of the Nagat filtration of oprys times M. Then because of the previous slide, we know that this is contained in I press. And so this sub module, first piece of the Nagat filtration times M also has a property that five of it is contained in I press times M. And then the second axiom you have is you want that this sub module is in fact contained in fill M. And once you have done this, you want to fill M and once you have done this, then you know that M module of fill M will be a module over oprys module of the first piece of the Nagat filtration. But remember that oprys module of the first piece of the Nagat filtration was by definition is just the structure shift of the quiescentomic site. Oh, so M mod fill M will be an O module and you ask that it is finite locally free. And then the third condition is that the image of the filtration by the Frobenius is big enough in the sense that it will generate I press times M as an oprys module. So if you are familiar with crystal and deudonis theory, again, it's very reminiscent of the usual notion of deudonis or filter deudonis crystal. And the third condition is just in this setting which has been a formulation of the condition that the filter crystal is admissible in the sense of quotient. So that's somewhat just the obvious extrapolation of this classical notion to the setting of the prismatic site. Okay, and then notation is if R is my quiescentomic ring, I will denote by Df of R the category of all filtered prismatic deudonis crystals over R. And the morphism are the obvious ones. I mean, they should be oprys linear and they should be compatible with Frobenius and with the filtration. Okay, and so now I can come to the statements of the two main results we proved. So I will fix again once and for all now quiescentomic ring R. And let me take G to be a feasible group over R. Then I can define M prism of G to be X1 of G by oprys. And here when I write this curly X1 is supposed to be like the local X groups in the category of abelian sheeps on the quiescentomic site. So you check that your feasible group G, where the reduction to the case of finite locality free group schemes defines a sheep, an abelian sheep on the quiescentomic site. And oprys is also an abelian sheep on this site. So you can take X1 in this category. And then you also define fill M prism of G to be the same thing except as you replace oprys by the first piece of the Naga filtration. And then the first main result is that, so if G is as before, then this triple M prism G, fill M prism G and for benus of M prism of G, which is just by definition the for benus coming from the for benus on the prismatic on the prismatic sheep. 
Then the claim is that this is actually an objective of this category DF of R. So this is a filtered prismatic dodomic crystal over R. And in what follows, I will denote it by M prism G underline. Okay, and as a remark, if P is zero in my ring R, so in the characteristic P situation, you can check using the graph, you can check using the crystalline comparison theorem, which was discussed in the first part, that in fact, this is just the same thing as the usual function you will find, people have studied in crystalline dodomy theory, which you find, for example, in the book of Bertolo, Brin and Messing. So this is nothing new in characteristic P. Okay, and then our second main result is that, well, this filtered prismatic dodomy factor underline M prism, which associates to G, M prism of G underline. It actually realizes an anti-equivalence between this category of BT of R of visible groups over R, and the category DF of R, which I defined before. And as a bonus, I mean, as a byproduct of the proof, you actually obtain that the prismatic dodomy factor, G goes to M prism of G, is already fully phase-free. But if you really want to get an equivalence of categories, so if you want to be able to describe the essential image, you need to add the filtration in the picture. Okay, so now I make a few remarks about the, these two results and about the way we prove these two results. So first remark is that, as I said before, in characteristic P, you just recover the usual factor from dodomy theory. And so in particular from the serum two, this classification result, well, you can deduce that the crystalline dodomy factor is an equivalence for all quasi-syntomic rings in characteristic P. And well, this was actually already known, of course, in some cases. So if you look at LC irons, which are also excellent, then fully phase-freeness was proved in the end of the 90s by To-Yong and Messing. And more recently, Eichelot has proved that this factor is actually an equivalence when the rings is, okay, Nocerian LCI and more of a F finite. So this means that the Frobenius is finite. And this particular implies XM. So in this case, the result was already known. Okay, then I wanted to explain now that this category DF of R may seem a bit abstract, but there is an interesting class of quasi-syntomic rings for which you can make it more explicit. Namely, you will say that the ring is quasi-regular semi-perfectory. If it is quasi-syntomic, of course, first, and then you also want that there exists that a perfected ring which maps subjectively onto R. It has to be big enough. Examples of such in the list of examples I gave before, where you can look at any perfected ring is obviously, because we know it's quasi-syntomic and the second condition is trivial. Any perfected ring will be quasi-regular semi-perfectory, but also like a quotient of such a ring by a finite regular sequence, which has bounded P and P torsion. So these are examples of such rings. Well, for such rings, it's like for perfected rings, and the example we saw, in fact, the Quotantin complex after P completion is just sitting in the minus one. And for this reason, you can check, using Hatch-Tech comparison, that for such a ring, the prismatic side has a nice feature that it admits a final object in this case, which you can just describe as, if you want the absolute prismatic homology, comes with a natural idea. 
And then also you can check in this case that if you take the first piece of the Naga-Tutration on this prism to be just the inverse image of I by Frobenius, then the quotient is isomorphic to I itself. So two examples of computation of this initial object, or some final object. The first case is if I is perfect to E, then this final object is just given by the ring A-info of R together with the kernel of the theta map. So, okay. In this case, the prism is perfect. So Frobenius is an isomorphism. So there is always some choice that you can work with the map theta, or with its pre-composition with Frobenius minus one, and then consider theta tilde. It's just a matter of conjunction. Okay, so that's the first example. And then another example, if you take a ring which is called a regular somaiparfectoid and with P zero in this ring, then this initial prism is just the same as a crease of R together with the idea generated by P. So you can compute it in several situations. And then we make a definition. It's very similar to the one we had before. A field of prismatic design in module over R will be a collection M, field M, and phi M. But no, I just have, instead of having finite locality free modules over the prismatic shift, I just have a finite locality free module over this ring, prism R, a certain sub-module of it, field M, and a phi in R map, phi M. And you just ask the exact same actions as before, but no, you only work with modules over this ring, prism R. So I want to repeat them because I don't have much time left. But okay, so you can make this definition and then the claim is that if my ring is quasi regular somaiparfectoid, I can in fact just evaluate all the objects I have on this, so I should have written final on this final object. And the claim is that this gives an equivalence between the category of filtered prismatic due to the crystals over R and the category of filtered prismatic due to the modules over R. So in other words, in this case, you can make the category more explicit. You can just work with modules instead of working with this object, this category DF of R. And moreover, if the ring is perfectoid, then you can do even better. Then you can even forget the filtration. So just look at the forgetful function for this category of filtered prismatic due to the modules over R to simply what you can call a prismatic due to the module, but in this case, it already has a name. People call them minuscule break is in FARG modules. And then the claim is, in this situation, this forgetful function is in fact an equivalent. So in other words, as a special case of the CRM2, you see that you recover the fact that for perfectoid ring, peaceful groups over such a ring are classified by minuscule break is in FARG modules. But this is a bit cheating because in fact, we need as an input for the proof, for the proof of CRM2, we need a special case of this. Namely, we need that peaceful group over a variation ring, which is perfectoid and has algebraically closed fraction field is the same thing as a minuscule break is in FARG module. And actually, it's not difficult to do. You can deduce the case of all perfectoid rings from this special case using VT-SAN target. But so we need this as an input. Okay, so can I just take five minutes to finish? Or should I stop now? Okay, go ahead. So I try to finish quickly. So I also want to mention that, as I said, in general, you really need the filtration if you want to describe the essential image of this prismatic do-don't-functor. 
But I just said before that in this remark that for perfectoid ring, the filtration is actually unique. So it's not needed to state the classification theorem in this case. This is not true in general, but this also works for P-complete regular rings. And as an example of this, I just take the ring of integers in some discreetly valued extension K of QP with perfect residue field. So for example, take a ring like ZP. And then also this case was, of course, already known before. So then you can prove that P-complete groups over this such a ring are classified by so-called minuscule brachycin modules. And this has been done by Bray and Kissing, at least for all P, and then extended by Kim, Lau and Yu to all P. But we can also recover this, namely by first proving that for such a ring, you can forget the filtration. And then checking that you can just evaluate and, well, there is a natural prism attached to such a ring once you choose a uniform measure. It's not a final object, but still you can check by some reduction to the perfect situation that in this situation, evaluation on this object is again unequivalent. And so, and then we check that the factor we have is actually the same as the one which has been studied by Bray and Kissing, all these people. But now the good point is that some of you directly learned in the correct category. So the proof works uniformly for all P. You don't need to make a special argument when P is too. Okay, and then just two words about the proof. So for theorem one, it's not surprising. So you just follow the strategy of Bertel and Bray and Messing in their book. So you have this definition of this triple. You want to check that it is a filtered prismatic dosonary crystal. And the idea is, well, you have to understand how what is X1 looks like. And in fact, for any group object in a topos, you have some device to make computation about this X groups, at least in low degrees. So this is explained in the book of, Bertel and Bray and Messing. And you are reduced to compute some prismatic commodity groups. So first step, you use a theorem of Reno to reduce to the case of Psemer groups which comes from some Abelian scheme. And then, via this Bertel and Bray and Messing partial resolution of any group object in a topos, what you just need to do is understand precisely prismatic commodity of Abelian schemes. And a key tool for this is provided by the Hatch state comparison theorem for prismatic commodity. So, but the ideas are really similar to the one of Bertel and Bray and Messing. And then, theorem two, the proof is more difficult, but I just want to point that the key idea is to use quasi-syntomic descent. So here, I introduced just before the notion of quasi-regular, so my point is that the very nice feature of this site, this quasi-syntomic site, which was observed by Bayt-Maurot and Schultz-Hou, is that this quasi-regular semi-perfected ring actually forms the basis of this quasi-syntomic topology. Or by any quasi-syntomic ring, by extracting enough piece-roots, you can always make it quasi-regular semi-perfected. And this means that once you have defined this factor, so once you have proofs here on one, then you can, to prove that it's unequivalent, you can somehow reduce to the quasi-regular semi-perfected situation. And then everything is more concrete, as I said. Instead of having crystals, we just have modules over this ring prism arc. 
Okay, and then the hard work starts, but I wanted to say this because it shows that even if you, if you want to, if you are a non-proliferation like getting classification results over like Nucleus and rings, then the proof is such that actually you really need to work with this big category QC, because the argument is by descent. This is very big quasi-regular semi-perfected ring. Okay. Okay, and then I will skip this point. I will skip this point. I just want to finish by saying that there are some natural questions which are left open. First of all, it would be interesting to see what happens for more general baserings. We only prove results about quasi-syntomic rings, but for example, in the work of Tzink, you can find some results for a very general, periodically complicated ring. So, for example, in the work of Tzink, you can find some results for a very general, periodically complete rings. But I have no idea how to do it for more general rings. And then another thing we don't have is something like deformation theory in this setting. So for this prismatic duodonné functor, so for the usual crystalline duodonné functor, you have this gotennik-messing deformation theory, which is very powerful. But here, we don't have any analog of this. And then finally, a final remark is more anecdotal, but it's about the case of perfect rings in characteristic P. I just want to point that in this case, well, because if you live over like an object of the prismatic side of such a ring R, it will be P torsion-free in particular. So you can always map this prismatic structure shift on this side to the same thing where you invert P and take Q to be the quotient of this injectable. And then you check that, well, in this situation, the prismatic duodonné module of any P-zero group G is in fact the same thing as home from G to the push forward to the quadrathymtomic side of this shift on the prismatic side. I mean, this is just obtained by looking at the associated long exact sequence. And then a natural question is whether the, I mean, front end has, was the first to give a general definition of the duodonné module over such a perfect ring of characteristic P. And the definition looks a bit similar, except that instead of this mysterious push forward here, you have this shift of VIT covectors. And so it would be also interesting, and we did not do it to know how to relate this object here to Fontaine's original definition of the duodonné functor with VIT covectors, but without choosing the general crystalline comparison C-ray, which of course implies this comparison, but just directly using the two definition. Okay. Thank you very much. So thank you very much for the interesting lecture. So are there any questions? Yes, Luke. Could you unmute him? Yeah, I'm trying, but it doesn't work. I don't see one. Luke, could you unmute yourself? Luke, can you put the microphone on? Now you can hear me. Yes. Okay. So I think you skipped one of the last slides where you were writing something about classification of finite, locative free community group schemes. You discussed PDVS groups. I presume that you, but presumably you can also classify truncated BTs and maybe more general. So that is the, Oh, I can't hear you well. Okay. So it was just that I wanted to see this slide again. Yeah. But then is it in terms of a module or whether a complex maybe? Okay. So we only do it for over perfected rings. So I think that's the question. I think that's the question. 
I mean, the idea is the same as the one you find in the paper of kissing. Well, actually you use the fact that you can just like identify the category of final locality free group schemes with the category of like two times complex. Of peace by groups. With an isogenic between them. And the only difficulty in general for general quasi-syntomic rings is I'm not sure by which kind of objects you would classify fine and local if we group schemes. So for perfected rings, as I said, for peace by groups, you can forget the filtration. You just have this minuscule boy kissing fog modules. So it's not difficult to guess what, which can, by which kind of objects you will classify fine and local if we group schemes. So it would just be a module of your ring prism, R, which is just a inf of R in this case of projected dimension less than one killed by your power of P with a four been used on a Vershebu. But in the general situation where you should also take the filtration into account, I'm not sure how to describe the very local if we group schemes. So that's why we only did it for perfected rings. Actually, this was already known in all cases except maybe when peace to my work of law. I had another question you mentioned work by think. So is there a relation between this place and the theory? Yes, this is another slide. I had to skip here. So let me take our game to be one first one reduces to the case of quasi regular semi perfected rings by quite a systemic descent. And then, well, you have this natural map from prism R to R was kernel is the first piece of the nugget filtration on this prison. It's a map of Delta rings. So it's a map of rings. But I said that the forgetful functor from delta rings to rings as a right adjourn, which is the big vectors fun. Yes. So this mapping juices a map of delta ring from prison R to W of R. And using this observation, you can actually check that you have a functor from our category DF of our filter prismatic modules. To the category of displays of our in the sense of. It's not an equivalent. I mean, the classification of sink is both more general and more restrictive. It's more general in the sense that I think he only assumes a ring to be periodically complete. But then he has to restrict to formal feasible groups because when he is to like some difficulties. So this is due to the fact that this functor we get from DF of R to this place of R is not an equivalent. But it is when you restrict to any important objects. The terminology of. Thank you. Thank you. Okay. Thank you. Then offer. Okay. So I remember the concern in this. Welcome to the net. There were some after seeing the way in particular some papers of law where he. He looked at the for example, complete. Mix characteristic. The local rings with. Is it perfect or not perfect. Then he gets some using saying he get, I mean, he gets. He does things which was using windows and friends. So they get a very simple linear algebra. Sinks which classify PD visible groups. Over mix characteristic regular rings, but sometimes he needs to. He needs to get a. To assume it is a. The result is different and the rest of it is not perfect. But any case he has some. Do you get a relation between. What you do and use to, but maybe there are several such references. I don't remember exactly, but I remember it's quite simple. Like you only need to give. Some. Some final free modules and some maps. Composition equal to the equation or something like this. Is it possible to relate it to your. Yes. 
That's what I mentioned here. So. As I said, in. For such. A P complete regular rings. This the filtration is also not relevant. So you can describe everything. I think, as I said, using currency realms, you can. Get a natural prism attached to the situation. So for example. In the simple case where I just okay. Yes. You just take this break is in prison. So you just take like. Is it your field double brackets. A. U for some formal variable U. And he is some ascent-style polynomial, which is like determined by after you choose a uniform either you. You have a natural map from this track S. Towards. Your ring R and either generator of this. Of the kernel. And then yes, then, then you get a quite simple classification in this case. in this case as modules, find locally free modules over such a ring together with the Frobenius which has the properties that it's, like after linearization it's co-carnalized. After you invert E it becomes, the linearization becomes an isomorphism, the linearization of Frobenius. So, but for checking that the functor is really the same as the one which is used in law. At least we checked it in this particular case, but this is the same as the functor that people usually consider. But in the marginal situation, I'm not sure we checked it. So there was also a technical question about whether the resdiel field is, when the resdiel field is not perfect, I think he needed to work like sync with only formal pt-visual group, but maybe I don't, do you get, do you get easy to, that is classified, well I don't have a, now I'm sorry. When difficulty that low on contours is that, I mean he uses crystalline Jordanian theory, yes, so somehow you have your ring R, some of us you reduce to, to arm of p where you can use crystalline Jordanian theory, but then you need to go back from arm of p to R. So you need to use gotonic messing theory. But the issue is that this, like this divided power structure on the idea and generated by p is not important when p is two. Gotonic messing theory only works well when the divided power structure is important, right? So I think for this reason, you have to do some, usually you have to do some extra work when p is two. But here somehow I think this issue does not appear because in some sense you directly, with this prismatic Jordanian function, you directly land in the correct category. Like again, back to this example where R is okay, in the original work of Boy and Orkissin. Actually you, what you first do is you produce a factor from the category of feasible groups over this ring R towards the category of filtered modules over another ring that's frack S, which is like the p completion of the pd envelope of E inside this ring frack S. And this is usually just called curly S. And then you have to do some semi linear algebra to check that this is indeed the same as the category of minuscule Boy and Orkissin modules. But here we directly get a factor from feasible groups over R towards minuscule Boy and Orkissin modules. For some of we avoid this difficulty and that's why we don't have any assumption on p. I have another small question. Maybe what is, so you said that in the definition of a prism you have a Cartier divisor. You can, are there examples where it is not globally principle? I know I don't know any example writes, writes not principle actually. I think all the examples I'll know the ideals are principle. I see basically one question. Can you hear me, Ophar? Yes. 
Okay, so the question was, does there exist any coincidence between the filtration of prismatic Jodonaic crystal and the filtration on bun G on the farfontaine curve since Fark proposed that using chromatic filtration one can correspond perfect with BT1 with bun G. One can what? Correspond perfect with BT1 with bun G. I don't know what that means. I'm just reading what it is. BT1 is like Berserk's detail of level one. I guess so, but I guess what is perfect with BT1 is, BT1 over perfect with bun G. I don't know what it means. But maybe BT1 over perfect, okay, I don't know. But I don't know any relation with bun G. I mean, of course, Burkis and Fark modules are related to modifications of vector bundles on the Fark-frontaine curve. But bun G is like the bun G bundle, so. Yes, so I don't see. For a group, they do it with a. I guess, well. Yeah, so front-end. The G should be. G should be reductive group or something. So I don't see any relation. Okay, well, if I, so it's not, I'm not, I will need more to see it in more precise form. So, yeah, so as far as I understand, okay. Yeah, okay, so by any case, you prove that the same construction of display is canonically correspond to what you do, using these functors to display, you get exactly things construction, which is defined for more general rings. Right, yes. Okay, all right, so. Yeah, so then this partly answer, because the work of Lowentz-Holm is kind of trying to use this display to do some complicated tricks to which I don't, okay, so it's, and of course you get also the crystalline deodorant theory from your. Yes, as a. You need to use some morphism between relating the crystalline, okay, but you wrote it, I don't remember if you put it in the slides. I just put a remark, I mean, yeah, basically you can check that the usual, I mean, you have this prismatic side, you have the crystalline side, let's say you just push everything to this color-syntomic side and make the comparison there. And then the claim is that the two categories you get are actually the same, so. For the ring. So the ring is killed by P. Exactly, yes. So you get a, okay, I have to, this was a. I mean, basically this follows from this crystalline comparison theorem. Okay. Okay, thank you, so I will have to think. Okay, not there. The slides will be online at some point. Okay, thank you. You're welcome. Okay, goodbye. Bye. Thank you very much, so goodbye. Thank you.
I would like to explain a classification result for p-divisible groups, which unifies many of the existing results in the literature. The main tool is the theory of prisms and prismatic cohomology recently developed by Bhatt and Scholze. This is joint work with Johannes Anschütz.
10.5446/54699 (DOI)
Okay, so hello everyone. So this is the last lecture in the Paris-Bijine Tokyo seminar. So the seminar will stop but our collaboration will continue in different forms and I would like to take this opportunity to thank all speakers during the past 10 years and my co-organizers from Tokyo, Takeshi Saito, and Sushi Shihou, Takeshi Tsujin, from Bijin, Yon Shun Hu, Ye Tian, and Wiju Jiang, and from Paris for this overview. I would like also to thank four more organizers, Christophe Bray, Ariane Lizar, and Yichal Tien. And it's my great pleasure to introduce the last speaker, Christophe Bray, who will speak on modular representations of G2L for Anonymous My Fair. Thank you very much. Thank you very much to all of you for this nice invitation. So I'm going to talk on joint work with Florian Erzich, Yon Shun Hu, Stefan Morat, and Benjamin Schraen. Okay, so the contents of the talk, we have three parts. In the first part I will recall past results. In the second part I will state a new theorem. And the last part of the talk, which will actually be the longest part, will be some ideas on the proof, fairly precise ideas on the proof. Okay, so let me start with an explanation of the setting and of past results. So throughout the talk, P will be a prime number, and F will be a finite field of characteristic P, which would be my coefficient field for all representations, either on the GL2 side or on the Galois side. And I will assume it is big enough in the sense that it will contain all hexagon values and so on, so that I don't have to worry about that. F will be a totally real number field, where P is un-ramified, and I will fix V, a place dividing P, a place of F dividing P, which will be my fixed place till the very end of the talk. I will only work at this place V. I will fix a quaternion algebra, D over F, which is split at all places above P, and at exactly one infinite place. And finally, I will fix a continuous absolutely irreducible Galois representation of Galois F bar over F to GL2F, which is totally odd and which is modular. So the precise sense of modular will be clear in the next slide. And the general aim of this talk, and not only of this talk, but of lots of work, is to understand better certain smooth admissible representation of GL2FV over F, which are associated to R bar, where FV is the completion of F at V. Okay, so I want precisely to define the representation of GL2FV I'm interested in, and it is called maybe improperly the local factor at V associated to R bar, which I recall the definition, at least the idea of the definition, because it's a big technical. So first recall that for any compact open subgroup of the finite adders of the group D cross, I have a Schimuracker XK over F, which is a smooth, projective algebraic variety over F. Okay. And the first representation one can consider is the following smooth representation of these finite adders over F. First, you take the inductive limit of the H1 et al of these Schimurack curves, which coefficient in F, this inductive limit being taken over the compact open subgroup K. So K is getting smaller and smaller in the inductive limit. And then I take the R bar isotopic part of this of this Galois representation. Of course, there's a Galois action because it's et alcomology. And I assume it is nonzero. This is what I mean by being modular. 
Okay, I'm not interested in modularity questions here, although at some point they are hidden somewhere, but I want to study something related to this representation, which of course I assumed nonzero. So, as I told you, I want to study a representation of GL_2(F_v). But the problem, you see, is that we do not know so far whether this representation π(r̄) has a restricted tensor product decomposition, a decomposition as a restricted tensor product of smooth D_w^×-representations over all the finite places w. In the classical case this is a known result due to Flath; here it is conjectured (I guess it is now a conjecture, by, I think, Buzzard, Diamond and Jarvis), but it is not known. So you cannot define a local factor at v just by using such a decomposition; you have to proceed in another way, which will be a sort of ad hoc way, and which will require some weak technical assumptions on r̄. And let me just mention that if one day one is able to prove that there is a Flath-type decomposition like this, then it has already been checked that in that case the ad hoc local factor that I'm going to define coincides with the factor at v of such a decomposition, if it exists. But one can define it directly. Okay, so I need to assume some weak genericity assumptions on r̄ from now on. Let me give them to you right away; this is not so important for the talk. So: p is bigger than five; r̄ is absolutely irreducible restricted to a certain open subgroup of the Galois group; I need some weak genericity assumptions on the r̄_w, the restrictions of r̄ to the corresponding decomposition groups at places w dividing p different from v, that I do not give here, not very important. I also need a condition at some places which are prime to p: if D ramifies at w, I want r̄_w to be non-scalar. This is not very important. So here is how one can define the local factor we are interested in. I do not give all the technical details here; this is not so important, and it is not new anyway. First, one can prove that under these conditions one can define a compact open subgroup K^v of the finite adèles of D^× outside of v, and then a certain smooth finite dimensional representation M^v of K^v over the coefficient field, which has to be thought of as a type, or a reduction mod p of a type, somehow. And then this local factor can be defined as follows. First you take the K^v-equivariant homomorphisms from this finite dimensional M^v to π(r̄). And this is not enough: you need to take the eigenspace for a few Hecke operators at finitely many places different from v. So everything that is going on here is at places different from v; we do not touch v. And the purpose of this representation M^v is to get rid of multiplicities that are coming from places different from v, because, as you will see in the rest of the talk, I'm going to use multiplicity one theorems. If I do not take this representation, I do not have multiplicity one; I have an artificial multiplicity different from one, which maybe can be dealt with later on, but for the moment we don't want to be bothered with such problems, and we can get rid of them like this. So this local factor is defined, but we don't know it is local: it is a GL_2(F_v)-representation, but a priori it fully depends on r̄.
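Schematically, and with hypothetical names for the auxiliary data (the level K^v, the type M^v and the Hecke eigensystem are only described loosely in the talk), the local factor has the shape
$$\pi_v(\bar r)\ :=\ \operatorname{Hom}_{K^v}\!\bigl(M^v,\ \pi(\bar r)\bigr)\bigl[\mathfrak m'\bigr],$$
where $[\mathfrak m']$ denotes the eigenspace condition for finitely many Hecke operators at places away from $v$; this is a smooth admissible representation of $\mathrm{GL}_2(F_v)$ over $\mathbb F$ with a central character.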
So it was defined in my paper with Fred Diamond about ten years ago, and then generalized in the paper by Emerton, Gee and Savitt that I already mentioned and will mention again in this talk. Okay, so π_v(r̄) is a smooth admissible representation of GL_2(F_v) over the coefficient field, and it has a central character, which is this one. Okay, so I am going to recall some known results about this π_v(r̄). The first one, of course, is the case of GL_2(Q_p), more precisely the case where F equals Q and D is GL_2. In that case π_v(r̄) is fully known. This is work of Emerton, building on work of Colmez, of myself, Kisin, of Laurent Berger and other people; it was ten years ago. So in particular we know the following three things about π_v(r̄). We know that the Gelfand-Kirillov dimension is one; I will recall just afterwards what the Gelfand-Kirillov dimension is. We know that π_v(r̄) has finite length as a GL_2(Q_p)-representation over the coefficient field. And we also know that it is local, in the sense that it only depends on the restriction of r̄ to the decomposition group at v, that is, on r̄_v. And I should mention, before defining the Gelfand-Kirillov dimension, that this theorem should, I guess, in fact hold as soon as F_v equals Q_p, because then D_v is GL_2(Q_p). But as far as I am aware, this is not known. It is known in some cases in the literature where F is not Q and D is not GL_2 but F_v is Q_p, not in this generality; but I think it should be true. Okay, but in this talk we are not going to be interested in GL_2(Q_p) anyway. Let me recall now the Gelfand-Kirillov dimension. There are several definitions; I give you maybe the most direct one. First I recall the definition of the congruence subgroups K_v(n), which is just 1 + p^n M_2(O_{F_v}), where M_2 is the two by two matrices; this is a compact open subgroup, and K_v is the maximal compact open subgroup GL_2(O_{F_v}). Okay, so we have all these congruence subgroups, and here is the definition of the Gelfand-Kirillov dimension. I guess it is due to Gelfand and Kirillov, but this precise definition can be found in a recent paper by Emerton and Paškūnas. So let π_v be any smooth admissible representation of K_v(1) over the coefficient field; K_v(1) is the first congruence subgroup. Well, it's an asymptotic definition, so I could even take K_v(n) for arbitrary n. Then there exists a unique integer, GK(π_v), which is between zero and the dimension of K_v as a p-adic analytic group, as a Z_p-analytic group (in particular here it is 4 times the degree of F_v over Q_p), such that the following ratio, the dimension of the invariants of π_v under K_v(n), which is a finite dimensional vector space because π_v is admissible, divided by p to the power n times this integer, is bounded between two strictly positive real numbers. They have to be strictly positive, because you could take a bigger integer here and then the ratio would tend to zero, and of course you don't want this. So, very roughly, you can think about the Gelfand-Kirillov dimension as an integer that measures the growth of these finite dimensional vector spaces when n gets bigger and bigger, asymptotically. Roughly, okay? Okay, so let me now recall some known results when we are not dealing with GL_2(Q_p). Of course, much less is known. So I need a few notations. f will be the degree of my field F_v, which I recall is unramified, and q is the cardinality of the residue field.
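In symbols, the definition just recalled (following the convention of Emerton-Paškūnas as I understand it) reads: with
$$K_v(n)\ =\ 1+p^n M_2(\mathcal O_{F_v}),\qquad n\ge 1,$$
the Gelfand-Kirillov dimension of a smooth admissible $K_v(1)$-representation $\pi_v$ over $\mathbb F$ is the unique integer $d$ with $0\le d\le 4\,[F_v:\mathbb Q_p]$ such that
$$c_1\,p^{nd}\ \le\ \dim_{\mathbb F}\,\pi_v^{K_v(n)}\ \le\ c_2\,p^{nd}\qquad\text{for all }n\ge 1$$
for some constants $c_1,c_2>0$.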
And I will denote by K and K(1), respectively, the maximal compact subgroup at v and the first congruence subgroup, so I get rid of the index v. I think these notations are quite standard, so you can remember them. This one maybe is less standard: K mod K(1) will be denoted by Γ; this is just the finite group GL_2(F_q), where F_q is the residue field. Z(1) will be the center of K(1). And finally, I need to call m_K the maximal ideal of the Iwasawa algebra of K(1) modulo the center Z(1), over the coefficient field. Maybe I should have called it m_{K(1)}, but we don't really use the Iwasawa algebra of K mod Z(1), only the one of K(1) mod Z(1). Okay. So maybe I should recall that Z(1) acts trivially on π_v(r̄); this comes from the condition on the central character. That's why in this talk everything will be modulo Z(1). And then here is one nice statement which is known in that situation, for arbitrary F, D and r̄ as before. One has the following theorem; well, let me state it, and then I'll say something about the names. We are concerned with the invariants of π_v(r̄) under the first congruence subgroup K(1). This is of course a finite dimensional representation, and it has an action of K mod K(1), which is Γ; so this is a finite dimensional Γ-representation, a tiny, tiny piece of π_v(r̄). It is also the kernel of the maximal ideal m_K acting on π_v(r̄). Okay. And this finite dimensional Γ-representation, even though you may think it is a small part of π_v(r̄), was not so easy to determine. It is explicitly known; in particular it is local, it only depends on r̄_v. And, most importantly for me, it is multiplicity free as a representation of Γ, meaning all the irreducible constituents are distinct. That will be the thing I am going to use in the sequel. So this theorem was first proven, under certain hypotheses on r̄_v, by Emerton, Gee and Savitt, in the paper I already mentioned and will mention again in this talk; they made the main breakthrough to prove this result, and the main tool they used was patching functors. And then it was generalized by three kinds of works: first the paper (I think it's chronological, yes) by Daniel Le, Stefano Morra and Benjamin Schraen, then some work of Yongquan Hu and Haoran Wang, and then another paper by Daniel Le. And all this was built on my paper with Paškūnas of many years ago, which itself built on the seminal paper by Buzzard, Diamond and Jarvis. Okay. So we have this multiplicity free result; I will come back to this Theorem 2 later in the talk, it is important for this talk. I should now make clear that if D_v is not GL_2(Q_p), then, apart from this theorem, which is not exactly what we had for GL_2(Q_p) anyway, none of the statements in Theorem 1 are known. Let me recall that these statements were the Gelfand-Kirillov dimension, the finite length, and the fact that the representation is local. Okay. So now I want to state our main theorem. First I need a hypothesis on r̄_v: a precise genericity hypothesis that is stronger than the weak genericity hypothesis I had as a running hypothesis in the beginning. For this, I need Serre's fundamental characters of level f and 2f.
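For later reference, here is the notation and the statement of Theorem 2 written out symbolically (my transcription of what is being described, not the speaker's slides):
$$K=\mathrm{GL}_2(\mathcal O_{F_v}),\quad K(1)=1+pM_2(\mathcal O_{F_v}),\quad \Gamma=K/K(1)\cong\mathrm{GL}_2(\mathbb F_q),\quad Z(1)=\text{center of }K(1),$$
$$\mathfrak m_K\ \subset\ \mathbb F[[K(1)/Z(1)]]\ \text{ the maximal ideal of the Iwasawa algebra.}$$
Theorem 2 then says that
$$\pi_v(\bar r)^{K(1)}\ =\ \pi_v(\bar r)[\mathfrak m_K]$$
is an explicitly known, multiplicity free, finite dimensional representation of $\Gamma$, depending only on $\bar r_v$.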
Okay. So if I want to define these fundamental characters as I am going to do, I need to fix embeddings, but it's not very important; I guess you all know what these fundamental characters are. So first I will assume that r̄_v is semisimple, and I will now denote it by ρ̄. So till the end of the talk ρ̄ is semisimple. I should mention that in all these questions about Serre weights and so on, this is always the first case that is usually considered; and then, once we understand this case, usually one goes to the non-semisimple case right afterwards, but afterwards. So I assume it is semisimple, and I want a genericity hypothesis. Let me give it to you; maybe it is a bit technical. You can of course write the restriction to inertia of ρ̄ in terms of Serre's fundamental characters, up to twist; so certain powers occur, and I want the digits in the p-expansion of these powers to be very much in the middle, between 8 and p minus 11; those are the bounds we need. In particular this implies that p is bigger than 19, till the very end of this talk. I should mention that we have not tried to optimize this genericity assumption: it could be that, working harder, we could get 19, and then, working even harder, we could get 17, and then 13, and so on. But for the moment we need this. So p is large, bigger than 19. Okay. Now I want to state our main result. It is the following: under these assumptions, the Gelfand-Kirillov dimension of π_v(r̄) is f. Okay, the assumptions on F, D and r̄ are as in the previous theorems, and on r̄_v: it is semisimple and sufficiently generic as in the previous slide. I should mention now three remarks on this theorem. First, of course, these assumptions, r̄_v semisimple and sufficiently generic, should be unnecessary; one should always have that the Gelfand-Kirillov dimension is f. The second remark is that in their paper, Gee and Newton prove that the Gelfand-Kirillov dimension is always at least f. They prove this using the patching technique: they know what is going on at the infinite patched level, and then they mod out to get down to π_v(r̄); and when you do this you don't exactly know what you lose when you mod out. That's why they only get a lower bound, a lower bound by f. So our main result is that f is also an upper bound. And finally, let me make clear right now that even under these assumptions on r̄_v, and even knowing the Gelfand-Kirillov dimension, so far we do not know whether π_v(r̄) is a finite length GL_2(F_v)-representation over the coefficient field, and even less whether it is local, meaning only depends on the restriction of r̄ to the local decomposition group at v. But we have the Gelfand-Kirillov dimension. So the rest of the talk, which as you see from the timing will be the longest part, will be devoted to giving you a fairly precise idea of the proof of this theorem. Yeah, some ideas on the proof. We are going to use two intermediate theorems, one which I call the first one and a second one, which will come in two minutes, and I will explain the proofs of these two theorems; when you put them together, you get the Gelfand-Kirillov dimension. So the first one is the following extension of Theorem 2. Theorem 2, let me recall it to you right away. It was this way. Sorry, this way.
It was this: when you take the kernel of the maximal ideal of the Iwasawa algebra of the first congruence subgroup acting on π_v(r̄), you get something which is multiplicity free as a Γ-representation and, equivalently, as a K-representation. So what we do in the first intermediate theorem is take m_K squared. Of course it is not a Γ-representation any more, but it is a K-representation, which is finite dimensional, and we prove it is still multiplicity free. You see, we need genericity assumptions for this, because in general it is not going to be multiplicity free; but for the moment we assume this, we need this multiplicity free statement. So that's the first intermediate theorem, and its proof follows the same techniques as the proof of Theorem 2, in particular by Emerton, Gee and Savitt and the followers; in particular we need patching functors, but it is technically much harder, as you will see. But it is not this theorem that we are going to use directly; we are going to use a corollary, which is not very hard to derive from the theorem, but which concerns the Iwahori subgroup, not the maximal compact K. So let me recall first that the Iwahori I is the group of matrices in K that are upper triangular modulo p; p here is my uniformizer, because F_v is unramified. And I(1) is the pro-p Iwahori, the group of matrices that are upper unipotent modulo p. And I denote, as I did for m_K, by m_I the maximal ideal of the Iwasawa algebra of the pro-p Iwahori I(1) modulo Z(1). And the corollary we are going to use is that if you consider π_v(r̄) and take the kernel of this maximal ideal to the cube, then it is a representation of the Iwahori, and it is multiplicity free. Indeed, if you take a smooth irreducible representation of the Iwahori in characteristic p, then the pro-p Iwahori acts trivially on it, because it is irreducible; hence it is a representation of I mod I(1). But I mod I(1) is a finite torus, which is an abelian group of cardinality prime to p, so the irreducible representations of the Iwahori over the coefficient field are just characters. So this statement means that all the characters that occur as subquotients of this representation are distinct. You see that you of course need a genericity assumption for that. And we are going to use this corollary. Now the second intermediate theorem is the following, which is entirely on the Iwahori side; the first one, apart from this corollary, was entirely on the K and K(1) side. It is the following: take π_v, any smooth admissible representation of I mod Z(1) over the coefficient field, such that the kernel of m_I cubed on π_v is multiplicity free, as we know is the case for π_v(r̄); then the Gelfand-Kirillov dimension of π_v is at most f. Okay, recall that the Gelfand-Kirillov dimension is something asymptotic for compact open subgroups, so it is perfectly well defined for a representation of the Iwahori. And then it directly follows from the previous corollary and this theorem that the Gelfand-Kirillov dimension of π_v(r̄) is at most f. And with Gee-Newton for the reverse inequality, we get the main result. Okay, so now I will explain the proofs of these two intermediate theorems.
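The two intermediate statements and the corollary that were just described fit together as follows (the labels A and B are mine, not the speaker's numbering):
$$\textbf{Thm A: }\ \pi_v(\bar r)[\mathfrak m_K^2]\ \text{is multiplicity free as a }K\text{-representation.}$$
$$\textbf{Cor: }\ \pi_v(\bar r)[\mathfrak m_I^3]\ \text{is multiplicity free as an }I\text{-representation (all characters of }I/I(1)\text{ occurring are distinct).}$$
$$\textbf{Thm B: }\ \pi_v\ \text{smooth admissible over }I/Z(1),\ \pi_v[\mathfrak m_I^3]\ \text{multiplicity free}\ \Longrightarrow\ \dim_{GK}(\pi_v)\le f.$$
Combining these with the lower bound $\dim_{GK}\pi_v(\bar r)\ge f$ of Gee-Newton gives $\dim_{GK}\pi_v(\bar r)=f=[F_v:\mathbb Q_p]$.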
And I will start with the second one, not the first one, because the second one is in fact shorter, although for us it was the hardest one. So I need some further notation. First, let me denote by π_v^∨, with this strange symbol, the algebraic dual of π_v. It is a module over the Iwasawa algebra of the pro-p Iwahori, so when I mod out by the maximal ideal of this Iwasawa algebra I get the dual of the I(1)-invariants, which is a finite dimensional representation of I mod I(1). So it is just a bunch of characters, a direct sum, because I mod I(1) has order prime to p. So we have certain characters χ_α, finitely many, which are what they are, and which are all distinct by assumption. Let me denote now by Proj(χ_α) the projective envelope of the character χ_α in the category of compact modules; here it is truly over the Iwasawa algebra of the Iwahori group, not of the pro-p Iwahori, but in fact it is just the Iwasawa algebra of I(1) mod Z(1) tensored with χ_α, and then the Iwahori acts on this. This is the projective envelope of χ_α. So we know that χ_α does not appear in m_I π_v^∨ mod m_I cubed, because we use our assumption that π_v^∨ mod m_I cubed is multiplicity free, a finite dimensional representation of I which is multiplicity free; and since χ_α already appears in the quotient by m_I, it doesn't appear in the kernel. And then, using this, it is not difficult, it is formal, using these definitions together with the universal property of projective envelopes, to prove the following. There will be three things coming; they might look a bit technical, but they are not hard to prove. First, one can prove there exist, for each α, I-equivariant maps h_α from 2f copies of the projective envelope of χ_α to itself, just one copy, such that we have the following properties. First, the image of h_α is inside m_I squared times Proj(χ_α). Second, the induced map from the 2f copies of Proj(χ_α) mod m_I to m_I squared mod m_I cubed is injective. And finally, and most importantly for us, π_v^∨ is a quotient of the direct sum over α of the cokernels of all these h_α. In all of this we only use these multiplicity statements and the universal property of projective envelopes, and easy stuff on Iwasawa algebras. So you see that for Theorem B, the one bounding the Gelfand-Kirillov dimension: first, we obviously have that the Gelfand-Kirillov dimension of π_v is at most the maximum over α of the Gelfand-Kirillov dimension of these cokernels, because of the last statement here, except you have to dualize back. So here there is a hidden duality between discrete and compact modules: here you are on the compact side, and you dualize back to get to the side of smooth admissible representations of the Iwahori. And you can compute the Gelfand-Kirillov dimension of such a cokernel, assuming we have properties one and two; you take the maximum over α, and this bounds the Gelfand-Kirillov dimension of π_v because of property three. But in fact it is not very difficult to prove that the Gelfand-Kirillov dimension of such a cokernel is at most f. This ultimately boils down to a calculation in the graded ring for the powers of the maximal ideal of this Iwasawa algebra. And it turns out this graded ring was actually computed in a nice paper by Laurent Clozel.
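A symbolic summary of the three properties and of how they give the bound (my paraphrase; the indexing of the characters χ_α is not spelled out in the talk): for each character $\chi_\alpha$ occurring in $\pi_v^\vee/\mathfrak m_I\pi_v^\vee$ there is an $I$-equivariant map
$$h_\alpha\colon\ \operatorname{Proj}(\chi_\alpha)^{\oplus 2f}\ \longrightarrow\ \operatorname{Proj}(\chi_\alpha)$$
such that (i) $\operatorname{im}(h_\alpha)\subseteq \mathfrak m_I^2\operatorname{Proj}(\chi_\alpha)$; (ii) the induced map $\chi_\alpha^{\oplus 2f}\to \mathfrak m_I^2\operatorname{Proj}(\chi_\alpha)/\mathfrak m_I^3\operatorname{Proj}(\chi_\alpha)$ is injective; (iii) $\pi_v^\vee$ is a quotient of $\bigoplus_\alpha\operatorname{coker}(h_\alpha)$. Hence
$$\dim_{GK}(\pi_v)\ \le\ \max_\alpha\ \dim_{GK}\bigl(\operatorname{coker}(h_\alpha)^\vee\bigr)\ \le\ f,$$
the last inequality coming from a computation in the graded ring $\operatorname{gr}_{\mathfrak m_I}\mathbb F[[I(1)/Z(1)]]$ computed by Clozel.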
Of course, it can also be derived from results of Lazard and so on; it is not so hard, but it was nice to have this paper of Laurent Clozel at hand. And using a not so hard calculation, we get the Gelfand-Kirillov bound. Okay, so that's the end of the second intermediate theorem. You see, it's not so hard, except that it took us a long, long, long time to find this slide. The reason is that for the rest there is already an existing strategy; not for this one, but for the first intermediate theorem there is an existing strategy, the one of Emerton, Gee and Savitt and the followers, which we are going to push one step further. Okay, so now we leave the world of the Iwahori and we enter the world of the maximal compact; this is the world of Serre weights and all these things. So let me recall that a Serre weight is an absolutely irreducible representation of Γ over the coefficient field, finite dimensional of course. And I will denote, as I did for characters, by Proj_K(σ) the projective envelope of σ in the category of compact modules over the Iwasawa algebra of K modulo Z(1); I need K here. This is an infinite dimensional representation which, if you dualize back to the world of smooth representations of K, is admissible. And the reason we introduce this projective envelope is that it is enough to prove the statement for it, just by using the universal property of Proj_K(σ). Let me recall the first intermediate theorem, here it is: the kernel of m_K squared on π_v(r̄) is multiplicity free. The irreducible constituents are Serre weights, and we want them all distinct. So in particular we certainly want this statement to be true, and in fact it is even enough to prove it for some specific Serre weights, which are called the Serre weights of ρ̄: those Serre weights which we already know embed into π_v(r̄), that is, such that Hom_K(σ, π_v(r̄)) is nonzero. Of course in that case we already know that the dimension here is at least one, so we need to prove that it is exactly one. So from now on σ will be a Serre weight of ρ̄. And the main tool for that will be the patching functor M_∞ of Emerton, Gee and Savitt, which itself builds on the patching technique of Taylor-Wiles and of Kisin. I'm not going to recall exactly what it is, because that would be a bit too technical and would require too much time, but let me just say it is an exact covariant functor from continuous representations of K on finite type W(F)-modules, where W(F) is the Witt vectors (well, there is an assumption on the central character that you can forget here), to finite type R_∞-modules, which of course satisfies several properties in terms of support when you apply it to some types and so on; if you want to know them, you can check the paper of Emerton, Gee and Savitt. Here R_∞ is the usual patched deformation ring, which in our situation, because of our genericity assumption, will be a full power series ring over the Witt vectors. Of course the functor depends on many, many choices; it depends on the global setting, but also on many other choices, so it is highly non-canonical. But we just use it, and use its many properties, which I will recall when I use them in the sequel of the talk. They are extremely useful, of course.
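In shorthand, and glossing over the conditions on central characters and the precise list of axioms (which are in Emerton-Gee-Savitt), the patching functor has the shape
$$M_\infty\colon\ \bigl\{\text{continuous }K\text{-representations on finite type }W(\mathbb F)\text{-modules}\bigr\}\ \longrightarrow\ \bigl\{\text{finite type }R_\infty\text{-modules}\bigr\},$$
exact and covariant, where under the running genericity hypothesis
$$R_\infty\ \cong\ W(\mathbb F)[[x_1,\dots,x_N]]$$
is a formal power series ring (it can also be viewed as a power series ring over the local framed deformation ring of $\bar\rho$, which is used later in the talk); the number $N$ of variables is left unspecified here.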
Okay, so I will now restate the thing we have to prove in terms of the patching functor. Somehow we are going to lift everything to infinity, because it seems impossible to prove this directly. So let me denote by gothic m_∞ the maximal ideal of this local ring R_∞; it is a power series ring. And let me take V, a finite dimensional representation of K, that is, of GL_2(O_{F_v}), over the coefficient field. Then one thing we get from the properties of M_∞ is the following equality: you can compute the K-equivariant homomorphisms from V to π_v(r̄), our local factor at v, as the dual of M_∞(V) modulo the maximal ideal of R_∞. Recall that M_∞(V) is a finite type R_∞-module, so when you mod out by the maximal ideal it is a finite dimensional vector space, and I just take the dual. The left hand side is also finite dimensional because the representation is admissible. So we want to prove the theorem; the multiplicity free part, which is the most important, follows from the fact that this dimension is one, which, equivalently, is the fact that the R_∞-module M_∞ of Proj_K(σ) mod m_K squared is cyclic. Cyclic means that you need only one generator, or in other terms that it is isomorphic to a quotient of R_∞. Because if you know this, then when you mod out by the maximal ideal you get a one dimensional vector space, so the dual is also one dimensional and you are done; you are done for V equal to Proj_K(σ) mod m_K squared. So what I am going to do in the rest of the talk is to give you an idea of how one can prove this cyclicity. So far this reduction to cyclicity is not due to us; it is due to Emerton, Gee and Savitt and the followers, there is no new idea. Now we really start analyzing this representation. First, something you can consider is to mod out by m_K instead of m_K squared. But if you mod out by m_K, then you are back in the world of Γ-representations, and it is actually the projective envelope of the Serre weight σ in the category of Γ-representations over the coefficient field. But here we do not mod out by m_K, we mod out by m_K squared, so it is not any more a representation of Γ, and we have to understand this guy. And we can; this is not so hard. Let me recall what it looks like. First, there is an algebraic part. Let me denote by V_{2,τ} the following algebraic representation of Γ; I recall that Γ is GL_2(F_q), the residue field. So GL_2(F_q) acts on Sym² F², if you fix an embedding of F_q into the coefficient field, which I do, take an arbitrary embedding τ, and then I twist by the determinant to the minus one. Everything is via the embedding of F_q inside the coefficient field, which is τ; so I put τ in the notation. And I have as many such algebraic representations as I have such embeddings, which is f: I have f such embeddings. Okay. Then you can prove that Proj_K(σ) mod m_K squared, as a K-representation, is an extension of two Γ-representations: you have Proj_Γ(σ) as a quotient, and as a subrepresentation you have the direct sum, over all embeddings τ, of Proj_Γ(σ) tensored with V_{2,τ}. And this is a non-split extension with respect to each of the direct summands you can consider.
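Here is the extension just described, in displayed form; the notation V_{2,τ} and the exact normalization of the determinant twist follow my reading of the spoken description:
$$V_{2,\tau}\ :=\ \bigl(\operatorname{Sym}^2\mathbb F^2\bigr)^{(\tau)}\otimes{\det}^{-1}\qquad(\Gamma=\mathrm{GL}_2(\mathbb F_q)\ \text{acting via the embedding}\ \tau\colon\mathbb F_q\hookrightarrow\mathbb F),$$
$$0\ \longrightarrow\ \bigoplus_{\tau\colon\mathbb F_q\hookrightarrow\mathbb F}\operatorname{Proj}_\Gamma(\sigma)\otimes V_{2,\tau}\ \longrightarrow\ \operatorname{Proj}_K(\sigma)/\mathfrak m_K^2\ \longrightarrow\ \operatorname{Proj}_\Gamma(\sigma)\ \longrightarrow\ 0,$$
a non-split extension with respect to each direct summand on the left.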
Let me just also mention that we know what this tensor product is. I mean, when you tensor something which is projective, you always get something which is projective, so we know that this thing is a direct sum of projective envelopes of some Serre weights, and we know which Serre weights: you need three Serre weights. You recover Proj_Γ(σ), but you have two other Serre weights, which are a small modification of σ in the direction of the embedding τ, which I do not recall explicitly, but everything can be made completely explicit. Okay. So this is the K-representation Proj_K(σ) mod m_K squared, and for the rest of the talk I will need to introduce the following quotient of it, which I will call Q_τ, for each embedding τ. This is the unique quotient of Proj_K(σ) mod m_K squared which is a non-split extension of the following shape: this is a push-forward where I kill everything that is not at the fixed embedding τ, and at the fixed embedding τ I have this tensor product, which is now a direct sum of two terms, because I also killed the copy of Proj_Γ(σ) in the middle. So I get a non-split extension like this, and I will use Q_τ in the next slide. Okay. So, to proceed to prove this (I recall that we want to prove that M_∞ of Proj_K(σ) mod m_K squared is cyclic), we are going to apply M_∞ to all these projective things. But we also need to lift the K-representation Proj_K(σ) mod m_K squared to a lattice, a free W(F)-module with a continuous action of K, because then we will be able to relate it to Galois representations and Fontaine's theory; that's why we lift it. But, yes? Sorry, I have a question. Yes. You mentioned tensor products, but can you take the tensor product of those things only when one factor is finite dimensional, or do you also have some completed tensor products somewhere? No, everything is finite dimensional here. Ah, because of this, okay, because you are working modulo m_K squared. Yes, you need the mod m_K squared; otherwise, indeed, this is infinite dimensional, but mod m_K squared it is finite dimensional, because this is the dual of an admissible representation. And this tensor product is also finite dimensional; everything is finite dimensional. Okay. Basically, till the end of the talk, except for the last two slides, everything will be either finite dimensional over the coefficient field or free of finite rank over the Witt vectors. So this is what I am going to do here: I am going to lift this K-representation as a free W(F)-module with a continuous action of K which reduces mod p to Proj_K(σ) mod m_K squared. It is easy to lift Proj_Γ(σ), because actually there is a unique representation of Γ lifting Proj_Γ(σ) as a free module over W(F); this is an old result due to Brauer, I guess, which you can find in Serre's book on linear representations of finite groups, for instance. Okay. It is also easy to lift the algebraic part. Here this was a representation of Γ; the lift is a representation of K, not of Γ, and it is not even smooth, it is algebraic. So I lift V_{2,τ} as Ṽ_{2,τ}, which is Sym² of two copies of the Witt vectors, with a twist by the determinant. And to make K act on this, I need to fix also an embedding. Say, okay.
K is GL_2(O_{F_v}), and O_{F_v} embeds into W(F), since F_v is unramified, via the embedding of F_q into the coefficient field, that is, via τ. Okay. And here is the first thing one can prove. If you take this tensor product here, just as it is, forget about inverting p for one second, and if you reduce it mod p, then you get the tensor product V_{2,τ} tensor Proj_Γ(σ), which is a direct sum of these projective envelopes. We do not want this: we want to find Q_τ. Q_τ is built from the same projective envelopes, the same Serre weights, except that you put an extension, in that order. So it turns out that, when you invert p in this finite dimensional vector space, there is a lattice, which is not the tensor product of these obvious lattices, but which exists, such that when you reduce it mod p you find exactly this non-split extension, in the right order. This is the first result we prove, here. And the second result is that from this we can get a lattice lifting Proj_K(σ) mod m_K squared. We take the following kernel: we use this lifting of Proj_Γ(σ), we reduce it mod p, map it to Proj_Γ(σ), and embed it diagonally into f copies of Proj_Γ(σ); that's the definition of the map on this factor. And the definition of the map on the direct sum is just that each L_{2,τ} reduces mod p to L_{2,τ} mod p, which surjects onto Proj_Γ(σ); each embedding is mapped to one copy of Proj_Γ(σ), you have f embeddings, so you have f copies, and you take the direct sum of these morphisms, and you take the kernel of this. Here this kernel is free over W(F), here we are in characteristic zero, so it is somehow a lattice inside this thing when you invert p, and this lattice mod p is exactly the projective envelope of σ mod m_K squared. So we are going to apply the patching functor to all these guys, and indeed, if we prove that M_∞(L) is cyclic, we are done. We know already, by previous work of Daniel Le, Stefano Morra, Benjamin Schraen, and Yongquan Hu, that M_∞ of the lift of Proj_Γ(σ) is cyclic. Remember, there is other work on this, but here we are really in the semisimple case for r̄_v; σ is a Serre weight of a semisimple representation ρ̄ of the local Galois group at v. And the first thing one can prove is the following proposition: the R_∞-module M_∞(L_{2,τ} mod p), and hence, by an application of Nakayama, M_∞(L_{2,τ}), are both cyclic, meaning here it is a quotient of R_∞ mod p, and here it is a quotient of R_∞. And, well, I don't know how much time I have left. Yeah, I have 14 minutes. Thank you. So let me say that the techniques to prove this are standard with respect to what is already in the papers by Emerton, Gee and Savitt, Daniel Le, Stefano Morra, and so on; maybe I'm not going to insist on this. The techniques are not new; this is a standard dévissage. Okay, let me skip this and proceed to the next step, which is the following. So remember, we want L, which is the kernel of this direct sum mapping to f copies of Proj_Γ(σ).
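The lattice construction just described can be summarized as follows (here P̃ denotes the unique W(𝔽)-lift of Proj_Γ(σ), and L_{2,τ} the non-obvious lattice inside (Ṽ_{2,τ} ⊗ P̃)[1/p] whose reduction is Q_τ; these names are mine, chosen for readability):
$$L\ :=\ \ker\Bigl(\ \widetilde P\ \oplus\ \bigoplus_{\tau}L_{2,\tau}\ \longrightarrow\ \operatorname{Proj}_\Gamma(\sigma)^{\oplus f}\ \Bigr),$$
where $\widetilde P$ maps diagonally through its reduction mod $p$, and each $L_{2,\tau}$ maps to the $\tau$-indexed copy through the surjection $L_{2,\tau}/p\twoheadrightarrow\operatorname{Proj}_\Gamma(\sigma)$. Then $L$ is free over $W(\mathbb F)$ and
$$L\otimes_{W(\mathbb F)}\mathbb F\ \cong\ \operatorname{Proj}_K(\sigma)/\mathfrak m_K^2,$$
so it suffices to prove that $M_\infty(L)$ is a cyclic $R_\infty$-module.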
But before going to L, we are going to proceed step by step, adding one embedding after the other. In particular, we start with L^τ, which is the same kind of kernel, except we only take one embedding, L_{2,τ}, mapping to one copy of Proj_Γ(σ), and we take the kernel of this. So this is just a fiber product: you have two free W(F)-representations of K, here and here, which we know have a common quotient Proj_Γ(σ) mod p, and we take the fiber product. And now I will explain why M_∞(L^τ) is cyclic. By exactness of M_∞, M_∞(L^τ) is also a fiber product: M_∞ of the lift of Proj_Γ(σ), which we know is cyclic, and M_∞(L_{2,τ}), which we know is cyclic, over M_∞ of Proj_Γ(σ) mod p, which we also know is cyclic because the first one is cyclic. However, it could be that the fiber product is not cyclic, of course, so we have to prove it is cyclic. And the proof for L, where you add the direct sum over all embeddings, can be reduced to this case just by an induction: once we know this one is cyclic, we add another embedding, we have another fiber product, and so on. Okay. So now I will explain why this fiber product is cyclic, and here we enter the world of Galois representations. Let me denote by R_v the framed deformation ring R-square of ρ̄, where ρ̄ = r̄_v is our local Galois representation, semisimple. This is the local noetherian ring parametrizing framed deformations of ρ̄ in the sense of Mazur and Kisin; there are no conditions, except a condition on the determinant that I will forget here. So here is what follows from the previous cyclicities I just mentioned. First, we have R_∞, which I told you was a full power series ring over W(F); but in fact, before being a full power series ring over W(F), it is a full power series ring over R_v, which in the particular case here, because of our genericity assumption, turns out to be also a full power series ring, but let me forget that here. Okay. So we know that M_∞ of the lift of Proj_Γ(σ) is a quotient of R_∞, and in fact, because these patching variables play no role at v, we know it is cut out by an ideal coming only from R_v: there is an ideal J of R_v such that it is isomorphic to R_∞ modulo J. Likewise for the other one, because we know these two things are cyclic R_∞-modules; and of course the same thing for the reduction mod p, by exactness of M_∞: it is just the quotient by (p, J). But in fact we know what R_v mod J is, because now we are in a specific situation: we know σ is a Serre weight of a semisimple ρ̄, and we can compute things, everything is quite explicit. And in fact we can prove that R_v mod J exactly parametrizes potentially crystalline lifts of ρ̄ of any tame type (type here in the sense of Bushnell-Kutzko) whose reduction mod p contains the Serre weight σ, and with parallel Hodge-Tate weights (1,0). So I should mention that it is not the kind of deformation ring that one usually considers, because it is a multi-type deformation ring: we take several types, and not just one. Usually you fix one type, you fix a Serre weight, and you consider potentially crystalline lifts of ρ̄ with this type and these Hodge-Tate weights; here we consider all tame types, tame meaning, by the way, that they are representations of GL_2(F_q) in characteristic zero.
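In symbols, keeping the notation P̃ and L_{2,τ} from the earlier display, the situation at this point of the argument is (the name J_τ for the second ideal is mine):
$$R_\infty\ \cong\ R_v[[x_1,\dots,x_g]],\qquad R_v=R^{\square}_{\bar\rho},$$
$$M_\infty(\widetilde P)\ \cong\ R_\infty/JR_\infty,\qquad M_\infty(L_{2,\tau})\ \cong\ R_\infty/J_\tau R_\infty,\qquad M_\infty(\widetilde P/p)\ \cong\ R_\infty/(p,J)R_\infty,$$
$$M_\infty(L^\tau)\ \cong\ M_\infty(\widetilde P)\ \times_{M_\infty(\widetilde P/p)}\ M_\infty(L_{2,\tau}),$$
where $J,J_\tau\subset R_v$, and $R_v/J$ is identified with the multi-type deformation ring of potentially crystalline lifts of $\bar\rho$ of tame type, parallel Hodge-Tate weights $(1,0)$, whose reduction contains $\sigma$.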
So, level zero if you like: we take all those tame types whose reduction mod p contains σ, and we know that this is exactly the quotient we have. So this is the place where, I think, modularity statements are somehow hidden, because modularity statements are hidden in the support of these M_∞-modules. And the reason we know this is that we derive it from the fact that if we just fix one of these tame types, with these Hodge-Tate weights, then the usual single-type deformation ring is a domain; we actually prove it is a domain. So since we know the support is a union of irreducible components, and for a fixed tame type the ring is irreducible, it must be everything; and when we put them all together we can derive that we must have the full quotient. Likewise for the other guy, except we have Hodge-Tate weights 2 and minus 1 at the embedding τ, which of course are coming from the algebraic part of the previous story. So again we compute everything explicitly. It is a little bit more complicated, because now we have to deal with Hodge-Tate weights 2 and minus 1, that is, up to twist, Hodge-Tate weights 3 and 0, so the computations are more difficult, but it can be done, it can be done even by hand. And likewise the single-type deformation rings with these Hodge-Tate weights are also domains; we can prove it. And so now, if you forget about these extra variables, the thing you need to prove is that this fiber product is a quotient of R_v; you now only consider these guys, forget about the patching variables, and prove it is a quotient of R_v. If you know this, it will be cyclic, meaning one generator over R_v, and you will be done. And to prove this, it is easy to see that you need to prove that J plus J_τ is exactly (p, J); and for this it is enough to prove that p belongs to J plus J_τ. What we know a priori is that J plus J_τ contains a power of p, but we have to prove that it contains p itself, and not only p cubed and so on. In other terms, this is something like: we have to prove that the potentially crystalline lifts here, with Hodge-Tate weights (1,0) everywhere, and the potentially crystalline lifts there, with Hodge-Tate weights (2,-1) at τ, are as little congruent as possible. And this can be done explicitly by hand: to prove this, we can check it mod p squared, and if you prove it mod p squared you are done; this is something you can do by hand. Okay. And this finishes the proof of the main result: we have cyclicity for M_∞(L). So I want to derive one application of this Gelfand-Kirillov business, which was sort of nice for me. It is an application to the p-adic Langlands program. It is based on the following theorem, which is a theorem of Dotto and Daniel Le, which itself builds on work of Caraiani, Emerton, Gee, Geraghty, Paškūnas and Shin. And it has to do with big patched modules. So far I was patching things like Hom_K(V, π_v(r̄)) for some finite dimensional V, and maybe the dual of that, which was finite dimensional; everything was of finite rank over R_∞, and so on. But it turns out you can also patch the full dual of π_v(r̄), which is of course infinite dimensional now. Okay.
And of course this is not any more finitely generated over R_∞; it is something which is finitely generated over R_∞[[GL_2(O_{F_v})]], that is, over R_∞[[K]], but not over R_∞, and it has a compatible action of GL_2(F_v). So one can do this, and this is done in a recent paper by Dotto and Le, building on previous work, but there they do exactly the thing we need for this local factor and so on. And so the corollary of our main result is the following; it was known that if we had the Gelfand-Kirillov dimension then this would follow, so it is not new in that sense, but it is nice to recall it. Take any map from R_∞ to O_E, any specialization of W(F)-algebras, where E is a finite extension of Q_p containing W(F). Then the corresponding specialization, M_∞ tensored over R_∞ with O_E, except that you have to dualize back (this is, I think, a Schikhof dual, something like that; you have to be careful about duality), and you invert p: well, this is nonzero. And then it is an admissible unitary continuous representation of GL_2(F_v) over E lifting π_v(r̄): it is a Banach space which has a unit ball preserved by the GL_2-action and which lifts π_v(r̄). But the point is that it is nonzero. The idea of the proof is that you need flatness: you need to prove that M_∞ is flat over R_∞, and if you know this, then you know that specializations are nonzero. But this follows from the Gelfand-Kirillov dimension of our main result, together with the result that M_∞ is Cohen-Macaulay over this noncommutative ring (here Cohen-Macaulay is in the sense of Auslander-Buchsbaum and so on: there is only one Ext^i which is nonzero), which is a result of Gee-Newton. And this implication is the so-called miracle flatness, here in a noncommutative setting. So we have this result. I think now it is almost time, so that's good; I just have one slide. I should mention that so far ρ̄ was semisimple, but we think the case ρ̄ non-semisimple will work as well, and this is actually ongoing work of Yongquan Hu and Haoran Wang. Of course we need some genericity assumptions, but ρ̄ will be non-semisimple. And finally, one other thing we hope to get. We proved that the Gelfand-Kirillov dimension of this sort of minimal representation of GL_2(F_v), where we forget as much as we could about multiplicities coming from places different from v, is f. But of course, if you add multiplicities coming from outside of v, finitely many (for instance if you don't take exactly the right compact open subgroup and so on), you will get something like several copies of π_v(r̄), but this won't change the Gelfand-Kirillov dimension. So we shouldn't need these multiplicity one assumptions to prove, in the end, that the Gelfand-Kirillov dimension is f. And maybe we can prove it: in fact we hope to prove that, at least for a suitable level K^v outside of v, a compact open subgroup of the finite adèles of D^× outside of v, if I do the same thing as at the very beginning to define π(r̄), but only take the inductive limit over compact open subgroups of GL_2(F_v) with a fixed prime-to-v level, and then take the r̄-isotypic part, then this is bigger than π_v(r̄), many, many copies of π_v(r̄), but we hope to prove that the Gelfand-Kirillov dimension is still f.
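Stepping back to the application stated a moment ago, schematically it reads as follows (the duality functor is only loosely indicated in the talk, something of Schikhof type, so the notation $(\cdot)^{d}$ below is a placeholder): for any local $W(\mathbb F)$-algebra map $x\colon R_\infty\to\mathcal O_E$ with $E/\mathbb Q_p$ finite,
$$\Bigl(M_\infty\ \widehat\otimes_{R_\infty,\,x}\ \mathcal O_E\Bigr)^{d}\bigl[\tfrac1p\bigr]\ \neq\ 0,$$
and this is an admissible unitary continuous Banach representation of $\mathrm{GL}_2(F_v)$ over $E$ lifting $\pi_v(\bar r)$. The key input is flatness of $M_\infty$ over $R_\infty$, deduced from
$$\dim_{GK}\pi_v(\bar r)=f\ \ +\ \ M_\infty\ \text{Cohen-Macaulay (a single nonvanishing }\operatorname{Ext}^i\text{, Gee-Newton)}\ \ \Longrightarrow\ \ \text{miracle flatness}.$$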
So of course then we have to deal with things which are not any more multiplicity one. Okay. So I guess I am done. Okay. Thank you. Thank you, Christophe, for this very nice lecture. So we have time to take a few questions. Are there any questions or comments? So when you take the Hom from r̄ to this cohomology: what do you know about this cohomology? Like, can you detect subquotients which are r̄ but not subrepresentations? Okay, yeah, I don't know. We only take r̄ as a sub; you are right. I don't know about other kinds of things you can do. Of course, this cohomology is infinite dimensional whereas r̄ is two dimensional, so indeed it could be that there are things which occur but not as a sub; but usually, in this setting, one considers such Homs, and we are very happy to be able to prove something about this. Are there other questions? Yeah, Daniel has a question. Yes. Okay, yeah, he wrote it. Okay: how about higher powers of m_K? So we think, okay, let me go back; if you assume enough genericity, let me see, where is it, yeah. I think, but we didn't write it, that from the statement about m_K squared, by some induction business, we can probably get higher powers; we can probably go a little bit further by some kind of induction, if you assume sufficient genericity: m_K cubed, m_K to the fourth and so on. But so far it was not clear to us that we would gain so much from proving these things; m_K squared, for what we have in mind, seems to be enough. Maybe in the future it would be interesting to have higher powers. But of course, in general, you cannot expect it to be multiplicity free, even if you are very generic, because you have finitely many Serre weights and this is an infinite dimensional representation. Okay. So I don't see other questions. Do you see something? So, in the definition of the Gelfand-Kirillov dimension, you take a ratio and it is bounded. Now you know this existence; can you take the limit as n goes to infinity, and do you know if it converges, or the meaning of the value? Sorry, I'm not quite sure I understand the question. You can take the limit as n tends to infinity; can you say something about it? No; in fact this is not exactly the definition we use, we use a definition in terms of Auslander-Buchsbaum theory, things like that. So, I don't know: you are asking whether this thing maybe has a limit, instead of just being bounded, right? Yeah, yeah. We don't know anything about that. Oh, thank you. Okay. So if there are no other questions. Just another thing about the Gelfand-Kirillov dimension: you have some compact module, but of course you can dualize, and does the Gelfand-Kirillov dimension come from the fact that somehow, after you dualize, you get a finitely generated module over something? This is a smooth admissible representation, so when you dualize you get something finitely generated over the Iwasawa algebra. Yes. Yes. And so you can look at the dimension, exactly in the sense of the noncommutative analogue of dimension for noetherian rings. Yes. And this is the Gelfand-Kirillov dimension. Yes.
So, in analogy with the theory of the Hilbert function and so on in the commutative case, you expect that, and in fact, in this case, Lazard and so on, I mean there is some theory for those kinds of noncommutative rings. So the question before was whether there is some kind of Hilbert polynomial, or something similar, in this noncommutative setup, for certain noncommutative rings which are close to being commutative, in the sense that you have some filtration here; I mean, this kind of Iwasawa algebra. So I think, well, I forgot a little bit, but I think, yeah: if you know the dimension, then you know that this must be something like a polynomial in n of degree, this dimension, the Gelfand-Kirillov dimension plus one, maybe. Let me see: if the Gelfand-Kirillov dimension is zero, which means that this thing is bounded for any n, which means that π_v is finite dimensional, then this is a constant polynomial. Yes, it must be this. So I think you can prove, this is one of the aspects of the Gelfand-Kirillov dimension, if I'm not mistaken, maybe you can correct me, that this dimension count is actually a polynomial for n big enough; actually a polynomial in n of degree the Gelfand-Kirillov dimension plus one. No, no, but it cannot be, because you put p to the n in the denominator. Oh, no, no, I mean the numerator. No, no, but if you write it in the denominator, okay, okay, so it's okay: the variable is not n, it's maybe p to the n. Yeah, then a polynomial in p to the n. Yeah, sorry, you're right. Yeah, not n, p to the n, of course. Thank you for the clarification. Okay. So then we thank Christophe for this nice lecture. Thank you for the invitation, and have a safe and nice summer vacation. Goodbye. Goodbye. Thank you.
Let p be a prime number and L a finite unramified extension of Q_p. We give a survey of past and new results on smooth admissible representations of GL_2(L) that appear in mod p cohomology.
10.5446/54701 (DOI)
Thank you very much. You've been welcome. So the title is here, the logarithmic approach to resolution. What I'll talk about is an algorithm of resolution of singularities of logarithmic varieties; this is a sort of logarithmic modification of the classical algorithm. So half of my talk will be about the classical algorithm and half will be about the modification. Let me start with an introduction: I'll formulate one of the main results and also give some motivation. So first of all, we always fix the characteristic to be zero, the only case where the classical algorithm is known, so we also only work in characteristic zero here; we always work over Q. In addition, by Var_k I'll denote the category of integral (or maybe locally integral, maybe disjoint unions, it does not really matter) varieties of finite type over k of characteristic zero; for simplicity I'll only discuss this case. And by Var_k^log I denote the same guys, varieties of finite type over k which are locally integral, but with an fs log structure, over k with the trivial log structure k^*. So classical resolution deals with the resolution of the first things, canonical, functorial and so on, as I'll formulate, and the algorithm will deal with the second. Okay, now let me first state Theorem 1, which is classical functorial resolution. It says the following: for any Z in Var_k, for any such variety, there exists a modification, that is a proper birational map Z^res to Z, which I'll denote F(Z), such that Z^res is smooth. So this is indeed a resolution in the classical sense, a proper birational resolution, the source is smooth. And in addition, this is about functoriality: F(Z) is functorial, or compatible, for all smooth maps Y to Z, in the sense that F(Y) is just the pullback of F(Z) to Y.
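In displayed form, the classical statement just given (in the formulation functorial for smooth morphisms, as in Włodarczyk and Bierstone-Milman) is: for every $Z\in\mathrm{Var}_k$ there is a proper birational morphism (a modification)
$$F(Z)\colon\ Z^{\mathrm{res}}\ \longrightarrow\ Z,\qquad Z^{\mathrm{res}}\ \text{smooth},$$
such that for every smooth morphism $Y\to Z$ one has
$$F(Y)\ \cong\ F(Z)\times_Z Y\ \longrightarrow\ Y.$$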
So, about functoriality, maybe just to mention a few names; I won't write them on the board. This was proved without functoriality, just existence for any Z separately, by Hironaka in '64. Then it was done canonically, that is, at least compatibly with all isomorphisms; canonical was done, I think in the beginning of the 90s, by Bierstone-Milman and Villamayor. And later Włodarczyk, around 2000, actually showed that in fact it is even better: it is functorial for smooth morphisms. There are many different descriptions of this algorithm, but it is always the same algorithm, so far. And in fact the algorithm I'm talking about also applies to the case when the log structure is trivial, so it also recovers this, and it gives a new one. I think I remember from some references that it is sometimes said that the algorithms in the 90s, like the one Villamayor offers, were not exactly the same, that is, there are different steps; but the differences are absolutely minor, some neutral parts that you can do in a slightly different order, or do twice instead of once, and this is more or less the only difference. So to a large extent it is the same algorithm, the same engine, the same invariants; and it was also Encinas and Villamayor, yeah, okay, let's discuss it in questions. And now, Theorem 1-log, the logarithmic version. This is from a joint project with Abramovich and Włodarczyk, and everything I'm talking about is in the framework of this joint project. It says the following: for any Z in Var_k^log there exists a modification Z^res to Z, which I'll denote F^log(Z), such that the source is log smooth, and F(Y) equals F(Z) times Y over Z, the product taken in the fs category; so this is not just the pullback, it is the fs pullback we are working with here, for any log smooth Y to Z. And this log smooth functoriality is much stronger than the classical one. For example, we can extract roots of exceptional divisors: with this we can take covers branched along exceptional divisors, and such a Kummer cover is still log smooth, so the resolution is compatible with this. So the main strength of this result is that the functoriality holds for log smooth morphisms. Okay, and what do you mean by the modification: is it just a modification on the level of the underlying scheme, with no restriction on the log structure? It's just a modification on the level of the underlying scheme, right. But your condition of being locally integral is not stable under general base changes of log smooth things, because you can have something which becomes non-integral; at least for normal things it is usually the same, but for just locally integral, if you take the tensor product it's not necessarily so, and even just an extension of the field of scalars doesn't preserve locally integral. So I don't know if it presents a problem. It is definitely not a problem; in fact our theorem is a little bit more general. I just don't want to go into equidimensional and so on, just let me keep it like this; I allow myself to cheat a little bit in the statements. And the main cheating in this theorem is not what you mention. The main cheating is that, in fact, Z^res is what we call a toroidal orbifold, that is, a Deligne-Mumford stack with finite diagonalizable inertia. So to get this stronger functoriality we are forced to go to a much wider category, and as far as we understand there is no way to get full functoriality without passing to stacks; we have no idea how to do it, and we suspect it's probably impossible. Okay.
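The logarithmic statement, written out with the fs fiber product made explicit (and with the caveat, as just said, that Z^res is in general a toroidal orbifold rather than a scheme): for every $Z\in\mathrm{Var}_k^{\log}$ there is a modification
$$F^{\log}(Z)\colon\ Z^{\mathrm{res}}\ \longrightarrow\ Z,\qquad Z^{\mathrm{res}}\ \text{log smooth},$$
such that for every log smooth morphism $Y\to Z$
$$F^{\log}(Y)\ \cong\ \bigl(F^{\log}(Z)\times_Z Y\bigr)^{\mathrm{fs}},$$
the fiber product being taken in the category of fs log schemes (stacks); in particular the functoriality applies to Kummer covers branched along exceptional divisors.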
Now one more remark: this map Z^res to Z is an isomorphism over the log smooth locus of Z. So if I'm given Z^res to Z, and here I have Z^logsm, the open subscheme where this guy is log smooth, then actually I get an isomorphism over it. This immediately follows from functoriality, because first of all this morphism should induce a resolution of Z^logsm, and Z^logsm is log smooth over the point; the resolution of something already log smooth can only be itself, so I just have no choice. Functoriality immediately implies that I'm not touching the log smooth locus of Z with the resolution. Okay, good. Now maybe I'll just say one word about motivation. Our motivation is to resolve morphisms, let's denote them Z to B, by modifying both: we would like to find modifications of the base and of the source such that the new map is log smooth for an appropriate log structure on B' and an appropriate log structure on Z'. Now, it's known that what is usually called semistable is impossible if the base has dimension 2 or more and the relative dimension is 2 or more, but this is the natural thing one can expect to be possible. And in fact such a theorem is known in a non-canonical, non-functorial way: it was first proved by Abramovich and Karu for varieties, and in a work in the Travaux de Gabber volume with Luc Illusie we improved it to quasi-excellent schemes of characteristic zero, or at least a large class of quasi-excellent schemes. But it is not functorial, and because of that it modifies the whole generic fiber: it can modify the generic fiber even if the generic fiber is smooth. So this weak version of resolution of morphisms does not imply semistable reduction over valuation rings, and one of our motivations was to prove semistable reduction over general valuation rings of residue characteristic zero. And the only path which I see is to prove functorial resolution of morphisms, and it must be functorial for log smooth morphisms; it must have stronger functoriality than the classical one. And actually the algorithm which we discovered now should be the generic fiber of, what is going on over the generic fiber in, a hypothetical relative algorithm. So what is the non-functorial thing you are thinking about: not alteration, but modification? Modification, yes. Okay, let me just say the idea; maybe I'll say one word, but we'll never see it again, it was just to motivate. The point is that we resolve indeed by modifications of both: we allow any large enough modification of the base, and after a large enough modification it should just depend on the pullback of Z, in a log smooth functorial way. Okay, I'll proceed. Good, so far for this introduction; now let's go to classical methods. So now I'll tell you about Hironaka's approach, as developed by all the other people I mentioned. So, two classical methods. Okay, so let's start with principalization. Hironaka reduces the question of resolution to the following question. Maybe I'll just say the words; the idea is as follows. First of all, we want to prove something functorial, so it's enough to construct our algorithm locally: because of canonicity it will glue automatically. Locally this Z can be embedded into a smooth ambient space, and, up to automorphisms, this embedding only depends on the dimension of the ambient manifold, so in principle there are not many choices here. So it's more or less safe to assume that we are given a canonical embedding into some smooth ambient manifold, and we would like to work on this smooth manifold. In such a case we can work
with z or we can work with so embed z into x, this is a manifold that is just by this I mean smooth, something smooth and work with the ideal which defines the z inside the restructure, so and we want to achieve something which is called principalization of this I now let me give a few definitions, so first definition boundary on x is a strict normal protein divisor E inside x, so even if we start with empty boundary it will appear after blowing up when we will start to blow up x, this is very crucial for runacosmetic, second we say that inside x a closed subscreen has simple normal crossing with E if locally there exists coordinates t1 up to tn at the point such that v is given by vanishing of t1 up to some number tr, so in particular it is smooth and E is also given by coordinates but maybe some different coordinates, maybe few of them from here and few of them not from here, so E is given by vanishing of t i1 times t i2 t i s, okay so namely I guess it's very simple we can just pick a local coordinate so with all our data is given as coordinate as linear planes with respect to these coordinates, good and another definition we say with i admissible blow up is the following datum x prime e prime i prime going to x e i, this is just for momentation I don't have time to define category of triples so this for us it will be just for momentation where x prime is blow up of x along some v, v has simple normal crossings with the boundary we started with so we are only allowed, admissibility means that we are only allowed to blow up something with normal crossing and in addition v is contained in the vanishing local supply and also we are only allowed to modify the local square i is not trivial, e prime equals to let me denote this map to x by x, e prime equals to pre-image of the old boundary we call this old component union with pre-image of v which is new component and i prime is just pull back or fine so the ideal is just pull back now exercise x prime is smooth, a manifold and p prime is a boundary just because our definition immediately allows to go to local coordinates you just compute there with the local inverse okay okay now theorem 2, principalization says that for any such a triple x, p, i where exists is a sequence of admissible blow ups i'll denote it x, n, e, n, i, n such with this guy is sort of result, what do i mean result? 
that means we call it i, n is invertible and monomial so with this vanishing local of i, n is contained in e, n so after this procedure we manage to make the ideal to be just a product of few components of e, n with some multiplicity okay and now maybe just remark why this theorem is stronger than theorem 1 so if z is vanishing local of i in this situation and this start with empty boundary then let z, n equal to empty set goes to z to z, n minus 1 and so on to z sequence of strict transform i claim that the total strict transform is empty why because the pullback of i is supported on the exceptional boundary so it necessarily means that and we started with empty boundary so it means that some pitch strict transform was killed and then take maximum i such that zi is not empty easy to see i will go to detail but it's really easy but in such case zi is just the i, i is central and hence zi has simple normal crossings with zi and this implies that first of all zi is smooth but even more it implies that the restriction of i is bounded is simple normal crossings so in a sense this theorem implicitly gives indication that you are not just resolving z you are resolving a logarithmic scheme you solve the problem you resolve the logarithmic scheme zi and the restriction of the boundary so even in classical situations there is a flavor of logarithmic things and another maybe at this stage i'll say the question what the boundary is so usually it is when i started to deal with resolution i thought it's just some nice subscene and it's not good to think about this as a subscene because the reality is wrong there's no map from e n to empty to e which is empty so this is not this morphism they are not morphisms of pairs if you want to view this voluntarily we should think of e as a log traction we should consider the log traction associated with the simple normal crossing divider and then indeed we have such a map from between logarithmic schemes and on the gravity of the ideal is the direct one okay good now ah maybe yeah i'll just say it without without writing on the board moreover we can a little bit it was not written anywhere but one can prove a little bit stronger result instead of resolving z with empty log structure you can consider z with a different falcian log structure log structure whose monoids are free any such log scheme can be embedded by strict closed immersion into some pair x e where x is loose and e is simple normal crossing divider in such case actually you'll get a strengthening of pure remand to log schemes with divine falcian log structure so it was very natural question also if the classical algorithm which in fact results in infalting log schemes can be extended to the algorithm which works with general log schemes at cost of working here with general log smooth log log right and this is indeed what's done in our way so this is in other motivation sort of absolute orthogonal to the first i started this to construct logarithmic extension of this model good now next topic is order reduction okay so main variant of the algorithm is order of i at the point okay i gave a lecture a little bit before i stated this a little bit differently but again i allowed myself to cheat a little bit order of i which is which is just a minimum over all f inside the stock of i at x excuse me minimum of order at x the f belongs to the stock of i and order of the function on a manifold is just the order of minimal non-venician term in table series for example so what did you write main invariant f at 
f what is after okay he's order of i at x what is f f is a valverie let me say okay okay okay i didn't just do okay okay now let's give a definition mark or although maybe weighted i would definitely prefer weighted but people calling you know you weren't marked so now it's always marked ideal is just id where i is ideal and d is a number which is like one zero and it will be just form a combination of but so so i don't pretend to give any deep meaning to this but i will consider either a singular locus of such guy as all points in the vanishing locus of i such that the order of i at x is at least d so we would like to reduce d so essentially we look at places where the order is at least some number and formal we'll walk in such way and in addition we say with a blow up let's say inadmissible so such guy is called an id admissible blow up if v is contained in id singular that is everywhere where we blow up the order is at least d and f is i admissible in real sense but this we blow up the multiplicity is d and the the center has simple normal crosses with the boundary okay and moreover we define transform for a such if we define transform i prime which is smarter than what we have done before before we just took pull back but now i say let's take pull back and let's divide it by this power of a new exceptional device what is i take ideal of v pull it back this is the ideal of the new exceptional device and take you to minus d so here there is an e or boundary or not in the definition of mark in the definition of mark ideal there is no there is no there is no d okay e is not is not okay in admissibility i require to be simple normal crossing with the boundary yeah when i say what it is i admissible it means that it's simple normal crossing is okay so such a thing is defined only because of the condition that my center is inside d multiple locus it's very easy again simple exercise check that this is defined that we can divide by this okay and then if you're in free it's called order reduction of mark ideals and it says this following it says with any mark ideal x e i d possess sequence of d admissible blow ups i will denote them x prime e prime e prime d but it's a sequence it's not a single one it's a sequence and each time we transform ideal by such a root such that this guy is resolved its order is known in d again as in theorem 1 or theorem 2 this sequence should be factorial and not spelled without but it's natural funcality for arbitrary smooth morphisms between initial that and maybe i'll just mention that theorem 2 is just theorem 3 for d equal 1 so theorem 2 is just part of the case and it was all that point doctor for reasons it's much much much more convenient to prove okay now in a logarithmic setting if you get to some details it will be the same structure we'll have fewer in one logarithmic theorem two or if we can see your freedom so here x is smooth always it's manifold okay it's a manifold is a boundary i is any ideal and after reduction of theorem 1 to theorem 2 we only work with smooth things and ideals inside there we don't see any non-smooth geometry okay all singularities are encoded in the ideal which is just a great work so if they get a zero you just blow up everything and get the mpc yes yes yes and induction this sometimes this happens so it can happen with the order if you ask it now when yeah it can happen with the order is infinity for ideal zero and indeed in such case you just should blow up everything and in logarithmic setting we'll have what infinity not 
only for zero it will be more interesting but okay now next so let me explain a little bit how one proves theorem 3 so if i can equal yes equal one then i prime is the three major of i or but you each time you subtract one copy of exception divisor okay and you want in vent to get to empty guy okay if he's one when resolving it means just with its focus is empty so by subtracting each time just one copy of exception divisor you invent manage to completely resolve it and obviously this implies theorem 2 because it means that your public of your initial ideal becomes just union of exception okay so maximum contact and induction on the dimension okay so what i said so far actually is characteristic free this is an indication that it's not that deep yeah but maximal contact is the first notion which you really mean okay so my miracle is that maybe i'll say that this is my miracle is that in the maximal other case that is the case when other of i at any point x is less equal to where is a reduction to small dimension okay and it goes as follows the problem of resolving i and d is equivalent to the problem of resolving ideal co phi called coefficient ideal restricted to h called maximal contact hyper surface and we all this defect work so uh the induction on the mention runs as follows we replace our data by some other data and any sequence which results we knew that there is also all the data in vice versa so the problem is sort of equivalent we encode everything which we had about original question into a question which lives in dimension one less not the way we can run induction and this is what you're an hacker did on idealistic this is a there is a big exponent it's more about definition of market ideal okay this is okay but anyway idea maximal contact was formalized by zero but ideas were in furonaca but you know it took a lot of time to refine them and to understand what is real in gene self-contained in gene yeah and it turns out that there is an odd reduction marked ideals is the minimal self-contained block which can prove itself by induction though the rest was sort of unnecessary in original but okay okay anyway okay now main example just to illustrate how this works so let's assume that i is just given by t to some power d plus a1 td minus one sorry a2 td minus two plus a d where t is t1 top coordinate and a i actually depend on t2 up to t on the other coordinates yeah for example locally we can write some Taylor series which gives something like this so let's assume it would be a given situation just particular case of other uh and then the maximum content is just finished a lot of so and coefficient ideal is just the ideal generated by a2 d factorial over two ad d factorial over d so it's just generated by coefficients but in weighted weight we should take each of them with correct weight and it's more or less here if the weight of this guy should be two and the weight of this guy should be d so we weight them approach and i'll just show one thing i claim that i d singular it is the play the points where the multiplicity is d is the same as c of i restricted on two page d factorial singular but it's the other obvious guy is d even only the order of this guy is at least two the order of next guy is at least three and the order of this guy is at least d and this precisely means what is it so at least uh equality of initial singularity lots of it's clear so uh uh i did missable block is any block who centralized here it's the same as what which is admissible with respect to new data 
now it's much more difficult but possible to prove it this miracle persists after any admissible block so after first block again we'll have equality of singularity lots of transforms and so on and you can also ask what where is a1 so the answer is that because of characteristic zero i don't need a1 i can always get rid of a1 and this is the case why i can take this if uh not reveal a1 completely okay good now let me say uh uh a few words about uh general case yeah now what i wrote now is sort of very coordinate dependent and i would like to construct something canonical something coordinate independent so this is achieved to large extent by derivations so one considers d just ideal of derivations on x over k yeah it's generated by block is generated by dt1 dtn okay and one defines d of i to be equal uh let me say d of ideal generated by let me let me say i is generated by a1 fm then d of i is generated by f1 fm dt1 f1 dt1 fm dt2 and so on so uh derivation of ideal is generated by the ideal itself and all derivations of elements inside and then we can iterate and define uh iterated derivation of good and uh when we can encode almost all basic uh uh tools of the algorithm which i described so far by use of derivations so it goes as follows first of all order of i at the point x is just minimal number d such that the this derivation of ix is trivial so derivation just reduces order by one it's a very simple exercise in local coordinates okay two uh maximum compact is any h which is v of t where order of t is one that is this is a really smooth guy t is a coordinate and t is contained in d minus first derivation so we know that d minus first derivation has order one yeah because order reduces by one each time and so it contains elements of order one and these are naturally also this is very intuitive but such an h has maximal contact with our initial problem it's as close to initial problem as possible and free uh coefficient ideal is just weighted sum of derivations of i to power d factorial and d minus i so it's just weighted sum of derivations okay now i managed to define order in coordinate independent way i managed to define coefficient ideal in coordinate independent way i have a choice of t here and this choice is really here they can the only real issue to prove independence of the construction is independence of maximal contact this really was an issue and it's solved in few ways but no secret so in a sense this is maybe one of main technical problems in constructing our work okay good i think uh i'll okay maybe just one remark complications what are the complications of this method this as i see them yeah because of time restrictions i i you'll have to believe me i i didn't give enough details that you will feel it but so first of all complication one is that e can be non-transversal to h in such case i cannot restrict my problem to h because restriction of e to h is not a boundary to solve this one actually first of all takes care of the of the old boundary new boundary will not pose a problem it will be always transversal to h but old boundary is a problem and one actually solves uh first of all it resolves i the order of i reduces order of i uh it places there there is a maximal multiplicity of the old boundary so there is a secondary invariant already in genonakis paper multiplicity of the old boundary and because of this the invariant uh of the algorithm looks as d1 s1 d2 s2 and so on where this is because order we start with the order of i or maybe order of its non-non-non-non 
part and then the number of exceptional divisors through the point after with the order of d the order of coefficient ideal restricted to maximal contact and again number of components of exceptional divisor and so on good and second i'll just say by worse in principle it would be much better to work with logarithmic derivations because all formulas even what happens to derivations after admissible block where easier for logarithmic derivations and not for usual derivations there is one point where use of usual derivations is completely critical and it is for the definition of the order our definition of order does not separate coordinates corresponding to exceptional divisors and other coordinates so we do not separate different logarithmic derivations they are different with respect to usual coordinates of logarithmic class but the order is completely not sensitive in a sense all combinatorial complications of the classical algorithm those which i put down the verac are actually because of this non-separating of two types of variables exceptional and non-exception okay good now let me start so what is this complication i didn't understand about this complication so if the maximal contact hypersurface is not transversal to the boundary when i just i cannot i cannot use this edge i can restrict i but i cannot restrict the boundary because i cannot restrict the boundary i cannot guarantee that the i can resolve i with empty boundary on this edge but it can be non non non non admissible it can be non so not simple crossing two edge but if i always blow up also in the maximal multiplicity locals of the boundary when i automatically have simple normal crossings this is wrong okay okay let me yeah yeah i guess everybody else is of enjoyment but yeah i was playing i was playing you like that but just right okay good now let me come out okay so main idea is everyone look just replace everything okay by look for example instead of xe just consider any log manifold x which i'll sometimes maybe write as x and mx just look smooth great or sometimes we can represent it as xe but this time is just x minus the severity locals for the work structure okay good in addition replace d by logarithmic derivations by d log and replace other by log order and so on so just put log everywhere you can okay so let me start with such a procedure okay log order of i on a log manifold is just minimal d so with logarithmic derivation of this top of i becomes 3 so this guy belongs to n and infinity yeah it can be infinite it may happen that we never get but already in classical situation this was the case for zero ideal and only for zero ideal here it can happen more frequently if you take any monomial the naming log derivation just multiplies it by number yeah all monomials were eigen functions of log derivations so log order of any monomial is actually infinite so there are a lot of guys open here at other but this will not be a problem in a sense this gives us this separation of two types of coordinates we have usual coordinates of what about and we have monomial coordinates which have infinite order you should not deal with them at all by use of derivations or by use of the classical immigrants they are completely in combinatorial side of algorithm so we'll get this distribution between combinatorial parts coming from log structure and geometric part which is maximal contact okay well again just put here walk and you are done coefficient ideal maybe i'll put here logarithmic coefficient again put here log and you are 
done four as i said this time x is any log made for five okay i thought to go to another board what about e the boundary boundary is what i wrote there e is just x minus x three so log manifold is a fs smooth or what is log manifold yes fs smooth all all yeah yes log smooth log smooth log smooth okay maybe i'll just give a six local let me see local picture uh completion uh formal completion looks as follows it looks as k you are joining some of it and also you are joining t1 up to tn so we call these guys regular coordinates we have other one and we call everything here the normal coordinates and this time i don't have any other formal coordinates all monomial coordinates are equally good for me and uh for example my algorithm should be compatible with taking extracting roots of monomial coordinates so i have no chance to have any any reasonable order on the normal coordinates we're just a monotonal part of the equation okay and uh j d equal to the initial values of j is admissible so if j is over four t1 up to ti let's say r comma m1 comma ms where these guys are just every every monomial so now we are allowed to blow up any guy for this form so uh real t1 tn is a log submunable and this guy is what's is a monoidal uh separate and uh uh no restrictions on monomials so uh obviously uh even if i start with something smooth after doing such blow up i immediately get something which is only log smooth but not smooth and exercise again exercise check the blow up of x if need is log smooth again a very simple exercise but i just only should clearly suffer so blow up and then suffer okay uh uh now i i should go to conclusion so um so it is t i1 ti something not not your j is t1 up to t few t's and when someone else okay the indices are not what you want no indices means is r and yeah yeah yeah okay yeah yeah i want why yeah yeah yeah yes okay and uh now uh in fact uh so uh we prove theorems uh two log and three log with respect to this notion of order and uh obvious uh generalization of uh marked ideals and uh the by the same by the same procedure and uh um uh it even uh the main variant is just d1 d2 dn where di lives inside n and infinity so the algorithm simplifies we don't have to separate from the old boundary uh we only have usual uh passage to maximum contact and that's it but we must explain how but plus one more stage which we call monomial stage which happens if d is in let me just illustrate how this may look like this may look like if i is generated by functions like m i j t to some power j by elements uh which for example informal completion look like this and all these m i j are non-invertible in such case the other is in the uh in this case just blow up just blow up the ideal generated by m i j by monomial coefficients just blow up this guy and may transform with respect just subtract the exception device what you get will be ideal of finite order and after that you can run the usual so m i j are well are in the here are mononial are mononial coefficients of uh usual coordinates in yeah i i i i'm i'm working in in this yes we have six in the mononime uh okay times uh powers of the things yes yes and and you just form ideal monomials this is where this is in fact this is the minimal ideal monomial ideal containing i this is invariant definition yeah if you want invariant you just consider minimal monomial definition containing i blow it up and you get fine and uh i'll just say very worse yeah unfortunately yeah we are completely out of time uh the real complication yeah which i 
wanted is that uh sometimes the algorithm insists that you must blow up something like m to one over d for invincibility reasons and because of that in fact uh one such thing appeals you must go to kumer italic apology and in fact we work not with ideals on what we want with kumer ideals and blow up of such things actually provides stacks so this is the reason why stacks appear and i stop here i think i guess you can start with the linear non-force logarithmic stack from the beginning yes absolutely uh yes uh that's correct but uh i i i think we formulate in this general general to formulate first for the the theorem it's easier to start this this this right also maybe just as a remark if you start this variety you can also want to finish this right no this is a step so after that you can there is also a step how to pass back to varieties but it is less factorial it's only factorial with respect to saturated uh logs this is a state as you have it you have a better statement for saturated morphisms or i have a statement which says that if you start with variety you can end with variety with law with law smooth variety in the process will be factorial for saturated log more and it means not for kumer covers but but how they are working so so in the right in case you need to introduce doing one more step which point you need that this point i need step okay uh well okay so i'll i'll try to very very briefly yeah so the point is as well i i uh sketch one more condition condition i know six seven six admissibility of id admissibility of this actually uh in this case it's not so clear what should be the condition of admissibility because my center is complicated i contain cinnamon but it turns out that the condition is very simple we just want i to be contained in this power of j this is the correct generalization of kernak's condition which we have blown up uh demultiple centers if this is satisfied when after the transform i can divide transform of a public of i by this power of public of j so this is the the correct condition now in case i apply this monomial stage for example uh i want let's take spec k of n monomial coordinate and i have ideal m2 for inductive reasons it can happen with starting from something innocent i i have to resolve something like this in such case obviously i would like to blow up m but j equal m is not admissible after blowing up such ideal i cannot divide by m square what is admissible is j to one half with this m to one half when i can blow blow it up and divide by by uh by the square so here it looks completely as a trick but if i uh if this happens on the h and i start it from some x or x this can be highly non trivial just just one example which completely illustrates this is a fault let's take x equals spec of k x m one coordinate is remember and one is monomial yeah so e is vanishing of n and let's take i equal x square minus m so in classical situation in classical situation it's all these one but all the other is two because m is of infinity so uh so we'll consider v is the i v is other two now resolution says that okay you should go to h which is k of spec k of m and then either and you restrict i and you still have your your strict activation you have to multiply so on h you must blow up m to one half and on x you must blow up something like x comma x to one half this is with center you must blow up now how can we understand such a thing but there are two ways first of all we can try say okay let's try a trick let's try to blow up x square comma m problem 
is that what we get will not be lox moves uh we insert the bits in grarity and this will be very difficult to control so blow up of this guy is not smooth not lox moves not lox moves but instead of this you can say okay my algorithm is factorial for lox moves covers so instead of working with x let's go to y just adjoin a root of root of m here we can easily blow up yeah so let's call this n and when my yield is x square minus m square it's very easy to resolve idea x square minus x square just blow up blow up x comma n we get by prime this is zero two cover so we can now divide this by zero two and that's what we get here if i just divide as a course model space i get the bed blow up i describe here so i cannot divide back and we skip the log structure but i can divide this as a step this guy's working and this is the sort of blow up what we call kumar blow up of x and one half so it is a stack for the log kumar log entrapology no it's stack for usual though it's enough to work with usual topology we don't have to go stacks are just conventional stacks the ideals ideals for kumar entrapology yeah it's a little bit confusing but again we wanted to keep everything as simple as possible so for stacks it was possible to work on it is dealing with yeah but the y to x original one is not an etal is a tal kumar entrapology stack quotient of why but okay okay maybe we should discuss no questions from him okay so you can serve the speaker again
The famous Hironaka's theorem asserts that any integral algebraic variety X of characteristic zero can be modified to a smooth variety X_res by a sequence of blowings up. Later it was shown that one can make this compatible with smooth morphisms Y --> X in the sense that Y_res --> Y is the pullback of X_res --> X. In a joint project with D. Abramovich and J. Wlodarczyk, we construct a new algorithm which is compatible with all log smooth morphisms (e.g. covers ramified along exceptional divisors). We expect that this algorithm will naturally extend to an algorithm of resolution of morphisms to log smooth ones. In particular, this should lead to functorial semistable reduction theorems. In my talk I will tell about main ideas of the classical algorithm and will then discuss logarithmic and stack-theoretic modifications we had to make in the new algorithm.
10.5446/54702 (DOI)
But I will speak about today's loosely intertwined with the Hallermann Lectures I'm currently giving here. So I want to talk about the geometric-sataki equivalence in a certain setting. And I don't want to claim that I can prove everything, but I want to say how one can get the handle on some key geometric step in the proof. But let me first of all recall the classical setup. And so let's say, let's say as little k is an algebraic closed field with simplicity. And g over k, some reductive group. So then you have several different functions. You have the loop group of g. This is this function which takes any k-algebra to all two groups. I think r to g of the wrong series over r. And in there you have the positive loop group of l plus g, which is also a function of two groups. It takes a power series by its points. So these are both, say, FPQC sheaves. And the l plus g is actually an, now we lost everybody. Okay, maybe just a very brief, not so much as happened. So I want to talk about geometric sataki and I started by recalling the classical setup where I work, say, over an algebraic closed field and have my reductive group g. And then I have the loop group of g, which takes any k-algebra r to the group, which is the wrong series value of the points of g. And inside there you have the positive loop group, which takes us to the power series value of the points of the field. And then l plus g is actually an affine scheme but of infinite type. So it's the inverse limit of the joint T mod T to the n-value points, which are some restriction of scalars along this. And l plus lg was some kind of in-scheme. You could write it as an increasing union where you bound the poles of this wrong series. And then the effingrass-manion is the, well, let's say it's the FBQC stratification. It's not really necessary to do FBQC, you could just do it all of the loop group as a positive loop group. And actually there's a direct way to say what this is as a functor. It's just classifying the following data. It classifies g-torsors, what was my name for the g-torsor? I didn't give it a name, let's call it E, over the spectrum of the long series, of the power series field. Plus the trivialization of E restricted to the puncture. So if you trivialize the torsors and giving such a thing as just giving the trivialization over the long series field, which is an element of the loop group, and then you mod out by the ambiguity of taking a trivialization over our power series t. And so maybe also if g is g l n, this is something pretty explicit. It's just a set lambda, which are some kind of projective. And contain a lattice, it's okay. Okay, and so the basic theorem about the geometry of these guys is that, it's what's called a strict int projective scheme. So strict refers to the fact that the transition maps are closed immersions. And maybe I should have said that you also have an action of the positive, well actually of the whole loop group, but in particular of the positive loop group on the grass mania, which is clear from the description as a quotient there. And the l plus g orbits are in projection with four characters of a maximal torus and an intake dominant guys. So they fix maximal torus and the borrel and g via mu maps to mu of t. So mu of t is an element in the loop group of g. And in particular reduces to the point in the F n grass mania. And for, so you have the open Schubert cells, the root of g mu, which has this l plus g times this point mu t. 
And usually one takes a reduced closures of these guys, it's called a reduced bar. Well actually, I mean, it's not the open Schubert cells by a nod and then the non open guys just this way. So there are some projective varieties, usually singular. Projective varieties. And okay, and so this whole thing is the union of these guys along close to emergence. And then there is a geometric sataki equivalence. So what's the complex numbers? Then we look at which and we learn and then there's some other papers maybe by Tim and Richard and Jim Van Ju, which discuss the case of other fields. So, maybe use an export to say this. So you can look at the category of purpose chiefs on this F n grass mania and you look at the l plus g equivalent guys actually. I should have switched those words, purpose, g equivalent chiefs. It's better if you exchange those words on the F n grass mania. And actually what Markovic and Villoni would say, but which hasn't been taken up by Richard's use so far I believe, is that if you do this actually with the L coefficients. So integrally, so Richard's and Ju they work with the QL coefficients here. It should also be true with the L coefficients. And this is equivalent to the category of representations of dual group representations on fancied generated. So in particular it's true with F L coefficients. It's also true with F L coefficients. Where the g dual is a Lengens dual group. How does this work? So this works in such a way that if you take a highest rate representation V mu here, which can, which doesn't make sense integrally, then this corresponds to an intersection complex mu here. Where mu is dominant core character for T, but that's the same thing as a core character of the dual torus. And so such core character say parametrize these, well, when a character is zero would be the reducible representations of high rate mu. And then you have this relation. L is not equal to the characteristic of K. So the g dual is discrete. Yeah, I mean I'm working on one algebraic closed field. So anyway, yeah. So the topology of this when you work overseas and you change, so if you, some of the topology relative to constructively like ZL, this shouldn't change when you pass to characteristic P different from L. So you wonder why the statement is independent of the choice of the field K. Is that the question? Is it easy to say that? No, it's not. I don't think it's a topology that if you vary the characteristics. So if you put a family of affegrass manians over, over spec Z or something like this, that you get the same category in all characteristics, I don't think it's uproarly clear. I think it's not clear that you get the same answer. Uproarly. They are cell decompositions and resolutions, but I mean there is some finer structure like these. Okay, I mean this is not just the equivalence of categories. There's some extra structure, asymmetric monoidal structure and getting all of this extrovariously and there's some work to do. But what lattice is it in this highest weight representation? Well I think there's a unique one which has the highest weight integrally, where the highest weight generates the whole representation. If someone takes the highest weight, I mean there's up to scaling a unique one, just the scaling doesn't matter, so you pick one and then you take this, this is the lattice generated by this. 
I mean also, if I work integrally I have to be careful what I mean by perverse because I do dual perversities and I need to specify which one I take and you can look it up and look which one you don't, I might get it wrong if I try to say it. Alright, I think that's what I wanted to say. So this is an equivalence of symmetric monoidal categories. And so it's clear what the symmetric monoidal structure is on the representations of the dual group. It's not as clear what you do here. So here's just in the product. So what you have to do here is you have to, there's a convolution product. So if you do want the monoidal structure, you can define this as a convolution product. So essentially this uses that if you take this Lg mod L plus g, well, if you take some kind of L plus g equivalent thing, then this has a natural map to Lg mod L plus g. And so if you have two such perverse chiefs, which are also L plus g equivalent, you can make one on this big guy because locally this is more like a product of two copies of this and then you push it forward and it is made. But this is not clear that this is symmetric. It's not clear that it's commutative. And so to get a symmetric monoidal structure, the usual argument actually proceeds by giving a different construction of this convolution product, namely it uses a so-called fusion product. And I want to talk about this later in the setting I'm in. But let me only note for now that for this you have to work over several copies of the base curves. The base curve in this case is something like k power series T. So in this case you have to work over the power series being in two variables and then specialize to the diagonal. So I have to stick to the middle. So for ZL I think the tensile product has also fell once. You expect to have no. Right. I mean so you have to be a bit careful. So if you restrict to some kind of free guys, torsion free guys on both sides and this works nicely, if you don't then of course on the category of representations there are the two one terms. And so you wouldn't expect that the convolution of two perverse things is again perverse but there might be some extra kind of two or one term appearing which you would have to neglect again. So I want to talk about a version in periodic geometry. So we can tell something like replacing k power series T by maybe the width vectors of k. And so this can be done and that's the theorem of sigma and u. So let's say g is over the width vectors of k where k let's say again it's an algebraically closed field of characteristic p, reductive group. And then you can see the following variant of CFM Grassmonyon. So which you only define on perfect k algebras. Sets which takes r to the set of all p torsion. The spectrum of the width vectors of r which are now in the case of a perfect algebra some nice ring plus the trivialization of e restricted to punctured guy which now means takes the width vectors and invert p. And again you have an action of the loop group in particular of the positive loop group which is the same which takes r to the g of the width vectors of r. And let me just abbreviate the theorem by saying that the similar results hold true. Where the small part of this is actually a theorem of Bogov button myself namely that these guys are that these closed Chewbatt cells are perfections of projective varieties. So Chew only proved that there are algebraic spaces which was enough for him to go on. And so the much more difficult part of the theorem is the geometric satark equivalence. 
So Chew actually had to use a trick for the geometric satark equivalence because you didn't have an analog so to get symmetric structure you have no analog of such a two variable. So instead he used this I think I think due to Gal fund so the geth is a trick of proving commutativity of these spherical Hacker algebras by using some solution of the group some anti-involution which gave you an anti-self isomorphism of the Hacker algebra which then showed it must be commutative. And there was a way to do this geometrically and that's how a Chew could pull this off. Let's get off on track. This done. So if you have a two and an automorphism of G it literally acts on this on all the categories of that. Right. Okay. And so is there usually the action let us say for in the automorphism action is trivial or canonical trivial or? Yeah I mean there was some subtlety in getting this to work because you had to check some coherences between the monolid convolution structure and the symmetry isomorphism and checking those actually came down to some combinatorial identities which he could only prove because they were the same as an equal characteristic and equal characteristics they follow from geometric Satake. So it's actually kind of convoluted argument. The group is not constant in the. Well I mean in this case it's somehow a constant I mean this is a split group in this case right. So. It's automatic but maybe you see. Why? It could be ramified. No it's a reductive group integrally so it's like like unramified. It's unramified and they work on algebraic closed fields so it's actually a split group so I don't have any issues here. So right but actually that's not the theorem I need in the context of these lectures I'm currently giving. So for Fox conjecture we need actually a version for some B to run plus cross-magnet. So B to run plus is a ring defined by Fontaine and PiD coach theory. So again a complete discrete variation ring as k power series T or the width vectors of k but now the residue field is Cp. And so of course it's abstractly as symbolic to Cp power series T but not canonically so and once I make some functor on some test category it actually is genuinely different from a power series ring. And so say if G over B to run plus is again a reductive group which again must be split automatically. You can do the similar thing so again you have an Fn-grasp-monion for G. But now the test category is kind of similar as in juice setup. We have to restrict some perfect algebras to get a well defined analog of the width vectors so you have to know perfecto it's Cp algebras which by the way I will maybe in a moment conflate with algebras over the tilt by the tilting equivalence to sets which well does a similar thing so it takes it to the set of G torsors over and now there's a construction which takes any such perfecto at Cp algebra and produces an analog of this ring B to run plus but for this family R. So there's a B to run plus algebra which if I turn that to the residue field Cp I get back the algebra R so it's a flat deformation to B to run plus plus a trivialization over what's called subtraction what's analog B to run the subtraction field of this Tvr and this is B to run plus. So you can consider this similar object and again you have an action of an L plus G on it. And now in so maybe actually make a remark how is this related to the previous guy there's a relation between these two. 
If G is defined over the vertex of K already which it implicitly always is because it's always split and then you can take a split form over the vertex then we can define a family of F-angraves mania which is some kind of Baylars and Duhnfeld type family which lives over something I would call the diamond associated with the vigtures of K. So these are technically these are all functors on perfectoid spaces and characteristic P whereas the fiber special fiber is a bit vector F-angraves mania. So the WK is not. It's not analytic but you can still define this as a V-sheave in the context of my course. So the special fiber is this thing that was defined by Schimvenjou and this is essentially a Schim and the generic fiber meaning over CP is this B-to-1 plus-class mania. And it's there's a more classical situation if you have a smooth projective curve say and it need a projective just some smooth curve and then have a reductive group over it then you can also build a family of F-angraves mania over the curve whose fibers at each point are some F-angraves mania corresponding to the complete local ring at this point. So no need to do sweetheart. So as usual, one can describe the L plus G orbits. And define two-bit varieties. Well, let me put varieties in quotes because they are now quite far from actual varieties. And the main theorem of my Berkeley course was that these Grigimus, let me put the speed around plus here to make clear what I'm talking about. This Grigimus belongs to this world of diamonds that I'm considering. So technically, it's a proper and spatial diamond, proper separated spatial diamond over the diamond for CQ. And so the course so far implies that you can do etallic homology. We have a six-fungta formalism of etallic homology in this setup. Etallic homology for these objects. And so in particular, using some abstract results of Ofer-Gerber, one can make sense of perverse sheaves on these guys. And also L plus G covariant, perverse sheaves. And all right, I should get somewhere soon. I can also define a convolution product. For this, you have to check by hand that the convolution morphism is semi-small, which you can do. So it can also define convolution product. But to get to symmetric structure, well, you could cheat again and use trick, which probably works. But actually, it's even necessary to have this picture with these high-dimensional bases for Fox conjecture. So I really want diffusion production. Do you want some constructability notion here to define? Well, I mean, if the L plus G covariant, it must automatically be constant on all the open Schubert cells, which gives you some constructability with respect to Z. And then, of course, you want that on each stratum there are local systems, I mean, for finite rank, which actually must be automatically constant by L plus G covariants again. And of course, you have to check the operations preserved this constructability. You have to check the operations preserved this constructability. I mean, somebody's gone. It's OK. I mean, the way to check this is that you can define some kind of dimmer 0 solutions of all these guys here, which are an isomorphism over the open L plus G orbit. And then you first check that everything behaves nicely on this resolution, where everything has some smooth locally the same thing as some standard guys you know from algebraic well, which analytifications of algebraic objects. So you can control it there. And then you, well, you just push forwards on a proper map and so on. 
You can control those things. Let me try to do it this way. OK. So for the fusion product, let me actually assume that G is defined over, let me fix a model over the width vectors, word P. Start with, and let me denote this guy here actually by L. Then in the world of diamonds, the issue was having this two variable family actually go away. So you have, can take the edict spectrum of L, pass to the diamond, and then you have such a two variable guy. It's called two, for two copies. And you can also define some Baylian-Sindrindfeld-Grassmanian over it. Which now in the case of two copies of the curve looks slightly different. It's not at each fiber and F and Grassmanian instead. So what does it parameterize? It parameterizes G-torsors over, so the R-valued points, actually, to be very precise, the R-plus-valued points. G-torsors over the space I denote by this bar of R-r-plus, and then in some sense takes a fiber product over a bar K with the edict spectrum of L. This can be defined in terms of the width vectors. So it's an open subspace. Of this thing you get from the width vectors, or this guy. For the width vectors it doesn't, there is no such thing for just the width vectors. So if you want to have these two variable guys, you implicitly, already for one variable, would have to do this Baylian-Sindrindfeld-Grassmanian, which lives over a small part of your curve. So implicitly, already for one variable, you would need the digital iteration from this width vector F and Grassmanian to the B-dram plus Grassmanian. You don't really see this over a curve because it's basically constant X. What did you say now? This is another approach instead of the, you say that you can define. I want a fusion product. And so for this I need, so yes, I don't want to use, use tricks, the skeleton trick. Instead I want to mimic more closely what's done classically with a fusion product. And for this I need families of this Baylian-Sindrindfeld-Grassmanian over two copies of your curve. And my small punctured, this is like a little punctured disk and I take two copies of it, which now some. So what you order with SPA, WR plus, WR plus, that's on the right hand corner. So what is WR plus is big vector of R plus, so it has, it is an anticring. Right. Was it P comma pi, edit topology where pi is the uniform? Okay, so it is still not an analytic. It's not analytic, but the open subspace will be analytic. Okay. So essentially I look at the open subspace where P is not zero and the type of our pi is not zero. So I look at these guys, but with a trivialization again. From graph of X1 to graph of X2, which I map from to here where X1 and X2 are the maps from SPA plus to SPA L diamond, which give you the map here. And it turns out that giving such a map to SPA L diamond is the same thing as giving an embedding of this edX spectrum into here with some properties, which is in some sense a graph of this map here. So away from the diagonal. Go see the other one. If you have a graph of something, then it is a lot of the source. And you are writing. Sorry, yes. And that's what I mean. Sorry. That doesn't make sense. Sorry. Thank you. Actually, giving such a map is equivalent to giving an unhilled, which I call R I sharp. So this corresponds to. And this unhills automatically embeds into here. Thank you. Away from the diagonal, the fibers are as a morphic to two copies of the speed around plus cross-monion. 
That's because giving a, like if it's already trivialized away from these two points, then to understand the extension to a G-tour over everything, you just have to understand what happens infinitesimally near those two points. And that's given by some usual bovella slalemma, which has an analog here, by this V-round plus cross-monion. But over the diagonal itself, the fibers are just one copy. Because if you're over the diagonal, well, then giving us more than away from these two guys is just the same as away from one guy. So then you just get one copy. So this picture is actually really close to the usual picture for a curve. And so what you want to do is that if f and g are l plus g, a covariant, which maybe for the moment is not critical, perverse sheaves on the speed around plus cross-monion, then their exterior tender product, the box product, defines the sheave on this Binance and Duhinfeld guy away from the diagonal. So away from the diagonal, this is more like two copies. And let's denote this open inclusion, J, into the whole guy. And what you want to do is you want to canonically extend this perverse sheave here to perverse sheave on the whole guy. And for this, let's assume that f and g are 3 over 0. So look at the box product. This has a unique extension to what's called a universally locally acyclic ULA. And this is a notion I want to discuss in the five minutes or so that remain. ULA sheave on this whole Binance and Duhinfeld guy perse-monion. And this extension is perverse. This is actually equal to the intermediate extension, what's called J exclamation mark star. And the restriction to the diagonal is equal to the convolution product. Where the convolution product uproar depends on the choice of ordering. But this picture is obviously invariant under flipping the two factors, which gives you the symmetry. And so this follows the usual arguments. So this is how one proceeds classically. But one needs a good analog. So once one has a good definition, the right definition of ULA, let me, you can say this universally locally acyclic. So in the monoidal structure, one has also to verify an hexagonal and pentagonal. So there are these high axioms, these high compatibilities. These you check by passing to three copies of this picture and so on. Well I mean, I mean there's a thing that if you try to do the same for the whole derived category, it's there's actually not any infinity monoidal structure on this. It's only two or three or something like this. So this nice symmetric structure is actually very restricted to the purpose of the chiefs. So let me recall briefly the usual definition of ULA chiefs. So let's say f from y to x is a map of schemes. Let's say finite type separated type of schemes. And f over y is some chief. Well it's actually be more precise. It's seven simple p plus. The chief from y is some coefficient. L is inversible on x as usual. Is f universally locally acyclic. And so universally refers to the fact that this should be true after any base change. The following holds true is that for all geometric points y bar of y, so this maps to x, this maps to some geometric point x bar of x. And I have another geometric point eta bar which specializes to x bar. And you want that if you look at the sections of this, that's a strict generalization of f that this is somehow the same over x bar as over eta bar. Meaning if I base change this guy over the strict generalization of x at this point to eta bar, this is unchanged. What is your definition of the direct category? 
Is it? Well let's assume that everything is a finite type over a field. I don't think that there is subtlety then. No, because it depends on if you use your definition, you can use the boital. Because you're working on this, usually it is the. Let's work with torsion coefficient so that this issue goes away. So it says something that if you have a specialization, so essentially says something like the consequence of this following. Say if f is proper, then actually the homology of the fiber y over x bar of is the same. Then if I take r of lower star f, then this is actually local system. It's locally constant. How do you prove this? Because it's always constructible. It's enough to prove that it's under specializations that's unchanged. This invariant under specializations is precisely what's encoded in this kind of condition. It's somewhat encoded locally whereas this is somewhat then a corresponding global statement. So it's a statement that some of the homology of y with coefficients in f is somehow invariant as you move on the Bayes-Ex in a sense. Torian, where did this come from? OK. Oops. I should do this. OK. So in analytic geometry, let's say, for a edict spaces. So for analytic edict spaces, the obvious analog does not work. The problem is that there are no interesting specializations, interesting specializations in edict spaces. So you mean to make sure that this thing about proper morphism? To make sure that the thing about proper morphism say is true. So you certainly want a notion which has this implication. So for example, you might have situations that you have some ball here and the point inside, which is roughly the situation we're in here with this diagonal inside this two variable guy. And then you have some y over here. And then you have the fiber y0 in here. And then the kind of situation we're in is that f over y is maybe locally constant on y minus y0. And we want to characterize uniquely the extension to x to all of y. Then you would somehow want to look at specializations which specialize into this closed point, but from a point in B. But there are no specializations from D minus the point to the point. So all the specializations, they somehow either happen in this closed stratum, which here is just a point anyway, or in this open stratum. But then you're only talking about what happens on the individual stratum, but you know things are well-behaved anyway. And so you can't operate, say anything about the variation as you change the stratum for the kind of stratum we care about here. So I'm already over time, but I want to give the definition. The usual definition of Q and A, it's with a attaché-transar product. It makes sense. What is one? The definition with a attaché-transar product of Q and A. OK, there's another definition of ULA in the literature. And I don't know. So let me give a definition which works anyway, which is closer to the classical one. So we asked the following. One is the same condition for the specializations. You still put it in. But it's, well, in the case of interest, we always satisfied anyway. So that's not a critical part. Same conditions for specializations. But then you put a second condition, which is the following. That for all constructable sheaves, G on Y, the three push forward of the tensor product is constructable. What does it mean constructable? constructable means it's locally constant after passage through constructable locally closed stratification. 
But you have to be a little bit careful about making this precise because the strata might not be attic spaces themselves, but only pseudo attic spaces. But there is a way to make sense of this condition. The point is that constructively set in the sense of spectral spaces. Yeah, in the sense of spectral spaces. No, no, no, in the sense of spectral spaces. Right, so this actually, I'm sorry for going over time, but if I'm not going over time, this whole talk doesn't have any sense. So what does constructable mean? So again, in our example, that you have this open, this closed immersion of a point into a ball, the issue is that I lower star of Z mod L to the NZ. This is not constructable. That's because part of the definition of a constructable locally closed stratification is, for example, if you have an open stratum, then the open stratum needs to be quasi-compact. And the issue is that if you look at B minus a point, then this is not quasi-compact. And so if you turn things around, the consequences are following. If you have G over the ball, some constructable sheath, then there exists an open neighborhood, U of the point, such that G restricted to U is actually locally constant. That's because constructable subsets of these edict spaces are slightly funny. So any constructable subset, if it contains a classical point, it automatically contains a small open neighborhood of this classical point. And so this means that if you apply this conclusion here that actually in the neighborhood of each classical point, there will be a small neighborhood such that the cohomology over this one point is the same as in the whole neighborhood. And in this way, you actually get this kind of local constancy around this closed point. And so if you put this extra condition in, which is, for example, satisfied if f is a local system and f is smooth and has some other nice properties, which resembles the known properties of ULA sheaves for schemes, then you can actually get the argument running. OK, let me stop here. OK, so we'll take a few questions. So maybe we start by Tokyo. How can that be not must in between? So two questions from Tokyo, then we begin. Yeah, so are there any questions? So I suppose that you want to prove this five-bit conjecture. So is this construction ready to do that? Proof of conjecture? I mean, so far I'm just trying to give a proper formulation of Fox conjecture. So I mean, implicit in Fox conjecture is that there is a geometric satar key equivalent, and only then you can really make sense of it. And that there is a nice theory of sheaves on Banji and something. There is this factorization sheave property, and you really need this vision product to make sense of it. Right, and you need to, yeah. So you really need this fusion product to make sense of everything. On the other hand, once you have a proper formulation of Fox conjecture, then as I said in my first lecture here, you actually automatically get a construction of, say, my simple error parameters for all irreducible smooth representations. And maybe in the case of GLN or so, it might be possible, but anyway, to do something. Right, I mean, for the moment, it's just about getting a proper formulation of all the objects in Fox conjecture. Thank you. Are there any other questions? Yeah, that's all for Tokyo. Thank you. So, Dijin? Bessarins? 
In your remarks, it is possible to contract a nearby cycle pattern to build a fusion product argument to contract a symmetric monoid of stock around the probability of the middle and later cross-mining. OK, I didn't completely understand the question, but I think it was whether there is some good theory of probes and nearby cycles in this setup. And so that's actually a good question. So for usual nearby cycles for schemes, there is in particular the theorem that if you have a perverse chief on the generic fiber and you take nearby cycles, it's again perverse. And this fails. Let me give you an example. So this means that you have to actually be a little careful. So let's say you have the ball times the ball, while projection to the first factor, mapping to the ball. Then the fiber here is a ball, while the inclusion to the second factor. And then I claim that I can find a perverse chief on here whose nearby cycles are not perverse. And so let me try to draw a picture for this. So let's say this is the ball, which is a special fiber. And then I have the generic fiber here, the ball times the ball. And what might happen? And then you have somewhere a fixed point x in this ball. And it has the preimage and the specialization of some subset here. Some open subset. Well, OK, so let me talk about the open. If I take it in the attic world, it's closed, but then I can remove one rank two point and it's open. And what I can do is I can put a chief which is concentrated on some small part here. So actually, I mean, you can take J Loiswig from a small ball in size. So this is a small ball. Small quarter compact ball. Last one. Quarter compact one. And if there's inclusion, take J Loiswig of my coefficient rank. And then, of course, the nearby cycles. So this can be checked to be perverse. Then the nearby cycles are 0 everywhere except at x. But at x, you get the compactly supported cohomology of this open subset. So if I take the nearby cycles, then this is concentrated at x and given by the compactly supported cohomology of this Q, which is lambda degree minus 2. And that's too far off to be perverse. So a related phenomenon is that the art in vanishing theorem fails for rigid spaces. Again, because I mean, the ball, if you just take a close ball, it's some a phenoid space. And so you would expect that the cohomology of all constructable sheaves vanishes above degree 1. But then this J Loiswig of lambda for a smaller open ball inside gives you a sheave where the cohomology is a compact support cohomology of this U. But this goes up until degree 2. And so this art in vanishing fails. So here you are considering because I don't understand what situations you are considering. You are considering the formal model of the ball over the spec of the vanishing or you consider a ball cross a ball. Well, OK, so right. I'm implicitly passing here to maybe the completion so that. So sorry, I mean, it doesn't really make sense to talk about the inverse here. Well, imagine that inside the ball times the ball, you have some kind of cone which contracts very sinly to x. OK. That's the kind of situation I'm considering. So you have this, maybe there are several different fibers here. And you have this thing which very sinly contracts down to x. And if you have a sheave there, then the nearby cycles will be concentrated at this point. But the cohomology there will be the compact support cohomology which goes too far. And so you actually have to make sure that these kinds of examples don't occur in the situation. 
The condition of constructable, I mean, is it enough to check this for the skyscraper G or something like this? Skyscrapers are never constructable. But I mean, in this way, this is a very difficult to check because you have to check it for O or G. Right. But I mean, you can check it in practice because it's a theorem that it's true if F is a local system. And the script F is a local system. And this small F is a morphism, it's smooth, comodologically smooth, and it's part of the compact. And it's preserved under push R, F, lower shrink in general. And so while then using some form of procedures, I mean, for example, on the DeMazur resolution, you have some local systems. They have this property and then push them forward to the stratum and then they're still locally, universally locally. Can be done. OK, so do we still have other questions from Beijing? No, thank you. That'd be all for Beijing. OK, so are there questions in this? Oh, yeah. So in the classical geometric statutes, the category of progress sheets are equivalent to the category of representations of G. That means this setup of D the round plus, plus many, what is the representation category supposed to be? Well, so the same, right? The same. The same. I wasn't taking this as G or G. And so this is if you work with CP, but actually this Fn-grasmani is defined over QP. And then if you encode the descent data, you actually also get L group. This is where you define nearby second from the B-derarm of FG, and you are the one of the Z-wings. So the question was whether the adds is based on the VFG, which is generic FB, is this B-derarm plus G, and the special FB is this VT vector FG of G. And so whether you can have some nearby cycles there. And yes, I think you do. And it might be possible to deduce this result by applying nearby cycles to the result of the B-derarm plus. I think if you have group, as you find over CP, you can also directly define all these Bayesian-Winfield grasmani is not just over spa L times spa L, but over spa OL times spa OL. And then have somehow this VT vector FG always in the picture still. So the question is, in your abstract, I think you mentioned something like collapse in two points of space there. Well, that's basically what you do, right? I mean, spa L times spa L is like spa spa Z times spa Z, essentially, locally at P. L is essentially QP. And then you have something like spa QP times spa QP over some more absolute base. And then I'm restricting to the diagonal. And so this essentially means that it's locally, the local picture is an open subset of spa XZ times spa XZ. And then I restrict to the diagonals. I'm taking two points and collapsing them. Same point. Same point. What do you mean the same? Well, but I mean, I'm infinitesimally away from P already, right? So I'm deformed a little away from P. Now I have two points which are close to P, but different. And then I can collapse on it. OK, so no other question. Let's thank you again. Thank you. Thank you.
In order to apply V. Lafforgue's ideas to the study of representations of p-adic groups, one needs a version of the geometric Satake equivalence in that setting. For the affine Grassmannian defined using the Witt vectors, this has been proven by Zhu. However, one actually needs a version for the affine Grassmannian defined using Fontaine's ring B_dR, and related results on the Beilinson-Drinfeld Grassmannian over a self-product of Spa Q_p. These objects exist as diamonds, and in particular one can make sense of the fusion product in this situation; this is a priori surprising, as it entails colliding two distinct points of Spec Z. The focus of the talk will be on the geometry of the fusion product, and an analogue of the technically crucial ULA (Universally Locally Acyclic) condition that works in this non-algebraic setting.
10.5446/52429 (DOI)
Hello, first time. My name is Yann Ditte. And in this talk, I want to talk with you about the user and the cultures of UX design and open source. Design and open source projects seems sometimes pretty hard. And different reasons for that have been suggested that maybe designers should learn more good or that the tools are lacking for designers and so on. And I think those are all very valid reasons. But I think we should also look at the differences in cultures. There might be different values and different practices that people are used to doing. I'm going in with a closer reading of two seminal texts of those disciplines. And the first of them I took as example for open source development and open source projects is Eric Raymond's The Cathedral and the Bazaar. And for UX design, I took a text that I think many would say sort of the seminal text of UX design, which is Donald Norman's The Design of Everyday Things. And in those texts, both talk about users. And I think it would be very interesting to juxtapose those two and look at sort of their different conceptions there. And I think the most striking thing in the Cathedral and the Bazaars is that the user and the creator are the same person. And this starts at the very first idea that is conveyed about the user and the creator. Every good work of software starts by scratching the developer's personal itch. This is like the first principle that Eric S. Raymond puts forth here. Developer sees a problem that they themselves perceive and they go to solve that problem for themselves first and foremost. And this is what, according to Eric S. Raymond, every good work of software starts with. But this theme also continues. There will grow a community ideally around that open source project. And this then continues this topic that the users of that software should be treated as co-developers as the best and most efficient way to improve your program. Code improvement and effective debugging are called here as the major concerns that are connected with that. So your project is growing, treat users as co-developers. So here you have user and creator as ideally the same person creating for their own need and use and in attracting other similar people who are also co-developers slash co-users. And now we will look into the seminal text of UX design that I picked here standing for this discipline of UX design. And this is the design of everyday things by Donald Norman. And here you see very clearly that user and creator must be different. And Donald Norman suggests there is a difference and creators often become expert with the device, but they might not become experts with the task because this is what the users are doing. So you have like a strong difference here, like your creators, experts with the device with the technology and users being experts in the tasks that should be done. And that this goes into the organization of the whole project and that like teams of creators need a vocal advocate for the people who will ultimately use the interface. Norman says they tend to simplify their own lives rather than catering towards the specific needs of the users in the situation of usage for which they are experts as we learned in the previous slide. And the only way to find out if your product works is to test the designs on users, people that are similar to the eventual purchaser of the product as possible. 
So you should find people who are similar to those people who will use the product in the end and try prototypes and ideas with them in order to have this counter checks towards making your own life easier or being too much of an expert with the device already and not seeing anymore what is important here. So this is like a topic that is repeated in Norman's works but also in works of other usability and UX thinkers that user and creator must be different and that you shouldn't have the idea that you're as omniscient to say that you know what users want. You need to check that you need to be suspicious here. Using user and creator is bad. Different expertises in those roles is good. So here we have those in contrast. You have the idea that in the cathedral and the bazaar users are ideally the same person they creating for their own need and use they scratch down on it and they will attract other similar people and in the design of everyday things you have different expertises between user and creator who have different interests as you saw and people tend to make their own lives easier rather than the users. And creators does need to learn what helps users by empirical methods. They need to do the research. And this also assumes a certain context in which the product is created which is for the cathedral and the bazaar a community of like-minded user and creators who can collaborate and who can code. And in the design of everyday things probably assumes an organisation with somewhat of a division of labour and that organisation being dedicated to creating products that are probably bought at some point. Those are very different contexts. So some things that might seem obvious for one side might seem weird for the other. So just building a feature that seems interesting might be great according to the ideas that are put forth in the cathedral and the bazaar but for UX designers might seem somewhat strange or problematic. And testing with users before building anything, trying that slowly out instead of just coding and seeing if it works for you might be a problematic idea towards a developer in an open source project at least if you follow the cathedral and the bazaar ideas here. It's like probably not that much fun. You need some resources to get those users and you don't have this user creator in unity. And you see that also in how UIs look. And I think the best UI if I may put forth so far an open source project might actually be the command line. In the ideas how people should work together this might be a really good UI for that work because you can for example add a parameter or code another program from the command line without cluttering the user interface. It's really good. You can have a lot of options in a rather minimal UI that isn't clattered by all those functions. And you also have a lot of possibilities. Adding parameters, calling other programs on Unity can also pipe that gives you a lot of freedom to develop for the command line and have your interface there. So it's easy to modularize but as many designers will tell you it's really hard for users but because you don't see what you can do and this is the advantage for modular development because what you can't see can't clutter the interface but it also makes it hard for beginners. So looking at an idealized UI maybe from the view of UX designer as a suggestion look at applications like some Mac applications or here I think sort of elementary OS is like a good embodiment of that idea. 
They are hard to modularize but they are very good for beginners. So if one coherent UI it works like you know like it's based on standards there's few experimentation it's rather coherent and adding a button would be like quite a political thing that needs to discuss through and through. It's like not very flexible there is one coherent concept. So this clashes quite a bit with that idea of like-minded users scratching their itches and adding by that to the project. So a curious thing like what is modular about user interfaces, graphical user interface icons and so like a lot of open source design and some projects is like redesigning icons and people do a really good job here. I think those icon sets from LibreOffice are pretty great but one reason why that is also so popular among designers in open source projects this is one of a few things where not of a lot of code made by other people needs to be changed but this is a modular aspect about interfaces that can be swapped out easily. So this is very attractive in the sense to work on that does. One thing that would be pretty interesting I think would be boundary objects like things that both disciplines can work on and with together and I think there are some that are quite well suited for the work of open source development and designers and I think that would help with some small wins here. And the first thing I want to suggest is having like an interface guideline for your project. Those are rather elaborate documents in which there are suggestions of when to use which UI element when you design interface. They were pretty popular in the 90s for a good reason because UX design as a role wasn't as widespread then and they are I think really good. Like the main thing would be picking one for your project just take care that they really give like specific advice. Particularly the Windows ones are good I think in specific advice. Here's a screenshot from the Win32 guidelines. So they clearly show here like what a balloon popup should be used for and what not and what different parts it has. And it's like very clear even if you are not a designer you can understand when you should use it and why not. So interface guidelines. Don't write them yourself just pick one again slight constraint of your creativity but it's really useful also as a coordination mechanism and they often lead to really good interfaces. They are written by really clever people who have thought out this stuff. Design systems are a bit more on the technical side basically UI elements and how you combine them and in contrast to user interface guidelines there are rather popular web projects that have a stronger variety of UI elements and particularly of visual styles. And there are some systems like Storybook that bridge really well to the written code. Here you see your design system I worked on for our donation page. It's like just a little design system it's not super elaborate. You see here like on the left are the single elements with the spacing and some advice is then in the comments and on the right is a page assembled from those elements. So you have a lot of like basically like Lego bricks how you can combine them in that final design and not every time like sort of you call it designer or new. You have your bricks already ready made. The third and final one would be extendable UIs which is a coherent mechanism for UI extension and an API. 
So you see like there's an UI concern and then there's also like a programming reflection of that because one big problem is like when you are flexible when you add stuff it often makes a mess out of that program that's like things added on top and on top and on top and it's really hard to keep that under control. Also you don't want to shout at people as I know that function is not really worth like building in for a lot of users just don't do it so a better way is maybe having that as an extension. If the extension is really really popular you could like still make that an official part. Firefox and Chrome do that really well. You have for example this part in the upper right hand corner where like projects can have their icons and if you click them you get sort of a mini that view they control but they can't mess with the rest of your UI. Pretty good. I think one little problem here is those pop-ups sometimes look a bit messy because everybody does their own style of that. What might be better if you already give them a kind of design system for those extensions. Here's the Jmovi statistics application that already gives you like useful for statistics UI elements for your extensions and those extensions usually look pretty good. Visual Studio Code is pretty good at that. You have like on the left hand side the sidebar where applications can put things into. Those are I think pretty nice projects. As for the Recap even though they have quite a part of shared history there are rather different views of what a user is and should be and how to work with them among UX designers and among open source developers. Even the open source culture as put forth at least by the Cathedral and the Bazaar you have like this experimenting user creator and a community similar to them and in UX design read from the design of everyday things you have this idea of designing for others and finding out what others need by empirical research. My suggestion rather than trying to change one or the other and going into like some sort of educating them look at what shared objects could help the collaboration. I think interface guidelines are really great. Pick one that gives you very clear advice and go with that and if you really really need to you can sort of write your own addition to a common interface guideline here. You can go with design systems particularly if it's more about a bigger web page doing rather similar things but not giving usually as much as advice so interface guidelines are more sort of text and advice oriented design systems is like really those Lego building brick model and create APIs for things that change the UI and extensions for people to try out stuff without messing with the whole UI and getting into fights what should be in there or not because that both keeps the UI coherent and clean like the UX design is wanted and it also gives people space to experiment like it's very important in many open source projects. Thank you. Thank you.
Collaborations between open source projects and designers are difficult. Instead of focussing on a lack of tools or skills, I want to show that the difficulties are also rooted in different views on what makes a "good" user and a desireable mode of collaboration. Open Source projects, prototypically, focus on the developer/user who scratches an own itch and coordinates in an stigmergic, bazaar-like way, while design usually focusses on expertise in designing for others and a plan/execution model instead. While no easy fix can resolve these differences, I want to suggest some ways to ease communication for developers and designers.
10.5446/54553 (DOI)
Okay. Hello, everyone. My topic is about Open Susage Summit 2012. Thank you for coming to listen. That is what I want to discuss about. I am from Taiwan. I am from Taiwan. I am from Taipei. I am also a community member. I am also the chief of the community. I am also from Taipei. Let me talk about Open Susage Summit. First, I must grade for the community. I am also a member of the community to support our event. That is very good for us. Thank you very much. I am also a member of the community. That time, they made awesome things. Very lucky for us, we have the second times to have Open Susage Summit in Taipei. That is where we are. We have a lot of fun there. A lot of people have a lot of fun there. That is a two-day event and one-day workshop for the whole event. That is what our schedule is. We have a lot of fun there. I also do some analysis. We got a very good balance between any different topics. I am also a member of the community. There is a set of 26. Yeah. That is special. I once had a lot of fun. The first one is speakers. We have totally, I guess, 23 speakers from different places. We have a lot of people from Europe. Thank you very much for those speakers give us very good talk. We have different event types. level is not quite high. So I think most community friends in Taiwan or other country that can easier to join us and find some fun. And the second point is we have local community co-hosts and the lattice gave us very good support because the co-host community they have very good talk about a lot of things like any IT things. So, and I also want thanks and if you have chance to Taiwan maybe you can keep in touch with them. That is hard layer. And the point three is we have lot of community in Taiwan give us help and you can solve that. Our open society community there is from China, from Japan and Taiwan, Indonesia layer is very good support for us and I must introduce Sakana but I think a lot of people know him already. But that time he is not in Taiwan. So, but he still give us lot of support. So we really did a very good job but he did very good support to us so thanks Sakana. And about other community some is about they provide very good networking environment in our new and some open OCFTWs means Open Culture Foundation. They do lot of open source promotion or open source event in Taiwan and they also give us lot of help. So we cannot have that good, very good conference with our name. Okay and yeah, all communities also, oh wait, oh that I want show is, our community speaker and our co-worker speaker they all attend the whole event all day. But in that day, because we don't have good weather like those days so people who attend HSME 2012 is not so many, like what do we have? So that is still bad to us. Okay and yeah that is point we got a lot of very strong sponsors so thank you very much. And okay let me talk about something, what is special for us. The first thing is we consider about local people in Taiwan is most likely attending workshop because they want learn something and not just learn and they want also do learn by themselves so we make a decision to have one day workshop where invites people to join us and do themselves. Then another special thing is we have a very senior contributor in Taiwan then we invite him to give him a special gift because he is still doing very good things with open to say Taiwan. 
And also because we have one co-host study area and the member of them is a very famous Linni's education blog we call him Viber and most people who learn or who use Mandarin will know him so that is what our points. One more we have, we have very lucky we have a very good designer volunteer to make a lot of virtual things to us. The all of those is from one volunteer so that is why we got the chance to make sense. And another one is we try to co-host release party with another destroy open to Taiwan community and we do some very funny things like we have a lighting talk with from different sites have different issues and sometimes they will discuss about our difference. So it's very, very funny and that is also good help us to promoting our open to say HSM. That is before last night before open to say HSM. Okay. And I want to say some hot difficult to us is the first one is how to in both local people attend our event because in Taiwan we have more than 100 event one year. It's been 2012 there's more than 120 I guess in Taiwan and every month we have big one big is more than it's almost more than 1000 people for software technical or open source every month. So that is quite helpful to invite people to attend our event. And in the same day we also know there is another two conference in Taipei and all then more than two, no, 500 people there. So it is quite hard to us to think about how to involve people to come. Yeah. And what we learn this time and how we do next for the next year is that we have to learn how to do this. So what we learn this time and how we do next for Taiwan open to say community. First is I think a lot of local community who will be a volunteer to help 2012 HSM me so the 2015 then I guess they all do very well and they all learn some skill to host the conference and the co-work because we have a co-host. So the co-work with another community is also important and that is they did a very good job the time. And the next is what's next for our open source Taiwan community. And now we are some we meet a lot of friends there on HSM me and we try to make a plan for our yearly plans is with a safe or open stake or in the alcohol study area. Then maybe after one years we can have a mini conference and mini summit for open to say Taiwan. And okay, let me introduce next during the next session. So we have a lot of people who are interested in open source Taiwan. And we have a lot of people who are interested in open source Taiwan. And we have a lot of people who are interested in open source Taiwan. So I think we can really introduce next during the two Asia. Now already you maybe already know this time open source Asia summit is in Indonesia. And Indonesia team they did very good job and they start very hard work for that. And you also can find our always EEM page. There's already have open source Asia summit 2016. And now they are couple papers. And because I guess they will have an announcement in our news doc open to sit out but maybe later so anyway just in the police submit your papers to open to say just submit only say 39 days. And we are also once we have next open source Asia summit. Okay. Any question? Okay. What plans does the Taiwan team have when it comes to things like cost cup? Will there be a large presence at cost cup? And what assistance do you need from the project for cost cup or the board to help you do that? You mean assistance? Yeah. Yeah. I mean do you need help with cost cup? 
What help is it that you would need from the open source of board or the project or the community to make sure that open source of board delivers at cost cup? Yeah. For sure. We always need help. Yeah. Because that is why we have yearly plan. We want more people join our community in Taiwan. So yep. Any other questions? Okay. Thank you very much.
The first openSUSE.Asia was awesome in Beijing, China, and this time we also made it great in Taipei , Taiwan. We designed some event made it different, and connected with local community to do more sharing and promotion for openSUSE. Just like last year said , we would like to continue this event in the future, so we will take this opportunity to introduce 2nd summit what it different, and what did we miss. This talk is quite flexible, we invite you to share your local openSUSE community, openSUSE events and openSUSE promotion in brainstorm. Or give your suggestion or advice about openSUSE promotion. Overall, anything about openSUSE promotion is welcome.
10.5446/54554 (DOI)
Hello and welcome to my amazing talk about running every password you could possibly hear in a single device. EFI, Grub 2, Raspberry all at once. So, my, I'm Alexander Graf. This time for real. David was just faking to be me. I'm usually a KVM and QM developer. You might have seen me from things like running KVM, running KVM, doing KVM on power. Things that people usually seem to take us granted these days. But this time around I'm going to talk to you as a member of the Open CSARM team. So you saw on the headline the Raspberry Pi. That's what it looks like. You probably, most of you guys have seen one. I just took this image from our webpage. It also has a logo on it, which is cool. So why would you possibly want to buy a Raspberry Pi? That's very, very simple reasons. It's cheap as hell. It's available like nothing else. You can just go to a store, grab it, and it's on based. What more could you possibly ask for? But most of you guys probably want to run software on such a device, right? So you want to boot, which means you want to get into a system. Without booting on the Raspberry Pi, it's really, really simple. You take this board, then you take your SD card, and then you plug your SD card in, and then you take your power cable and plug your power cable in down there, and then it just boots. Unless you're the one creating the image that is supposed to make this thing boot, which is what we're here for, right? So let's take a look at the SD card. What does this thing actually look like? What do we have in there? That's the usual. We do have, the screen is really dark, we do have our root file system, we do have a swap partition. The usual stuff you would expect on a main device that you run your operating system from, but the Raspberry is special in one regard in that it also has a FAT partition. And that one's very, very mandatory on those devices because the FAT partition actually contains a few files that are incredibly necessary to boot such a device. The really, really most important one is boot code bin that just lies around on that FAT partition, that just tanks there. And it's so important because that's what is the initial boot code. That comes from the Raspberry Pi Foundation. It's actually GPU code, so the whole thing doesn't boot from the ARM systems, it boots from the GPU. This is GPU code that then goes in and reads the config text file, which is also on there to figure out what it should do, how it should look like, whether it should enable serial or it's what frequencies it should use, what monitor depth it should use. That's all written in config text with defaults, obviously. And we usually, if you have a Raspbian distribution, you also get a kernel on that FAT partition that you boot, which ever from the very first time we enabled the Raspberry Pi and an open suzer, we figured it was a really bad idea. So instead, we are using Uboot at that point. And Uboot then goes in and can actually break out of that FAT partition and go into your real root partition and read a script called boot script from there and load the kernel in a.d and device tree from your root partition, which means you can actually update files with normal RPMs. That was that different from how your PC boots. Very simple. Your PC boots first off by being a PC and not a Raspberry Pi. And then you have something, some storage where you put your firmware in. On the Raspberry Pi, everything's on the SD card. On a real PC, you have something sold onto your board where your EFI firmware usually lies. 
And that EFI firmware goes around and just looks at your hard disk, at your storage device, whatever they are, even a network, and just searches for something it should boot based on different algorithms. It finds something that it can execute and should execute in boot orders, which usually in our case, open suzer is Grap2, which then uses callbacks into EFI to proceed, load the kernel, run that kernel, and you're good. So you can see it's actually pretty similar to how the Raspberry Pi boots, just that you have a firmware piece of hardware rather than your SD card and you're not running off a GPU. But other than that, pretty similar. So what is this EFI thing really? The whole talk is about running EFI, right? So what is EFI and why are people so scared of it? I honestly don't know. Well, why they're scared, but I can tell you what it is. The really, really most important crucial core piece of EFI is something called the system table, which is just a pointer to a struct that has pointers to structs that have pointers to structs that have pointers to structs, and all of those contain either information or function callbacks. The most important one of them are the boot time services. That's basically a collection of around 50 function callbacks that you can call into to do different things during boot time. Grap2, for example, uses those. I just took a few most important ones out here. Obviously you want to load images, right? So the load image callback, if you have an image, memory, and you actually want to have it loaded into the system, you call the load image function with a binary that's EFI compatible. Anybody recognize what this might look like? You've seen that thing down there before? Yeah. It's a P executable. So basically the same thing that Windows uses. It's the same format. It's a pretty simple format, to be honest. A lot simpler than elf. So loading that into memory is not too hard. It's really just sections that you take and then put into RAM at different locations. And then you've got an entry point, jump into it, and that's all you need to do for loading. So loading an image is easy. The other really core piece of EFI is that it has a notion of objects. It doesn't call them objects, and it doesn't call classes, classes, but it is basically the same idea. So you have protocols, which are classes, and handles, which are your objects. And imagine you have a disk device, right? You've got a set of disks, your first set of disks in your system, and that one set of disks can implement a class called a block operation class. You never give names to anything in EFI. They're all based on IDs. You have really long, 128-bit long GU IDs, but this is basically just a class that says I can do block operations. And that consists of just a few function callbacks again, so all of these are structs, right? So this is a struct that basically tells you it's a struct. It's a struct inside a struct. And then in there you have function callbacks that you can call to read or write to such a device. It's very, very simple. And it can also have a second or third or fourth or how many ever you want different classes implemented by the same object. In the example of a set of disks, for example, you usually have a thing called a device path where you can find out where in your device tree this thing is. So this is like my disk is attached to a set of controller, which is attached to a PCI device, and then you can just walk through this path. 
So it's really just a simple means of providing access to objects. And objects can be arbitrarily enhanced too, so you can load another driver that then adds objects to your object list and exposes different devices basically with this. And the next really cool thing that EFI does, which is important for filmware, is it manages memory. You always want to know which memory in your memory space is already occupied and which isn't. So EFI maintains a memory map that you can always ask for, where it just says, all right, so at this address I have some space available, at that address everything's already occupied, at this address there's nothing there at all, at that address we got runtime services, I got to this later, and say down here is a lot of memory available again. If you allocate memory, what happens is that EFI basically just goes and says, oh, you want one megabyte allocation, all right, I'll swing down an existing available size and add one more blob for your one megabyte allocation, we turn this thing to you and that's it. It doesn't do anything beyond this. So if you don't free this, it will still be allocated by the time you exit your application, which by the way, grab two does. So memory allocation is really simple to you, you can always ask for your memory map, you can receive it whenever you like, it's always up to date, so you always know how much memory is still available for allocations. Console is obvious, it's just pointers to the console objects for standard in, standard out, and additional tables contain fancy things like your device tree or your ACPI tables, your DMI tables, everything that's just arbitrary data that you want to have somewhere in memory and know where it is, it's just put with IDs into those arbitrary tables. So it's just tuples of ID and pointer. Front-time services are a really, really cool thing in EFI. Imagine you boot your PC, you boot it up and you have a lot of RAM. And initially when you boot, of course, EFI is running there and EFI owns some of that RAM. Now EFI goes and loads grab, grab goes and extends itself, allocates a lot more memory, loads Linux into that space, Linux gets loaded into real memory, and then what? Well, then at this point, Linux actually is also an EFI binary that executes and talks to EFI and tells EFI, you know what, EFI, go away, I don't need you anymore. I'm done with booting, I don't need booting anymore. Please give me the machine, I want to have access to all of that memory without asking you for it. So it does that, but one thing that you can do with EFI is you can still preserve some of EFI in your memory space by having that blob be self-relocatable and Linux calls you with your new relocation addresses on it. So this blob is still available while Linux is running. So Linux can call into functions inside of that small space and then those EFI one-time services can still do operations on behalf of Linux. So this is code that actually is film work code that you just call into. And the most obvious examples for one-time services are get time, so you can ask EFI what time it is just to access the real-time clock. You can ask EFI to give you a variable or set a variable, you have some variable space in EFI and you can ask it to reset your system, for example. Please reboot now and you don't need to care about 50 different power management units or whatever you have out there, just reboot the system. So it's really a convenient hardware section layer for you. So how do we bridge UBoot and EFI? 
How do these interfaces even match? I mean, UBoot is a completely different world, isn't it? It's run for embedded, it doesn't have anything to do with a real cool service that do have EFI. Well, if we take a really deep look, memory management, yes, sure. We need to write new code to support all the memory management that EFI exposes in UBoot because UBoot doesn't have a notion or has a notion of memory allocations, but it's different from what EFI thinks it should be. But anything else, if you look at the network stack, well, there's a network stack in UBoot. So if we just write small web code, we can as well just access the UBoot web code, the UBoot web code from a random boot time service callback. Same goes for console, for disk. All the devices actually look almost the same on the interfaces if you really compare EFI interfaces and UBoot internal interfaces. So all we need to do to support calling into UBoot code from a boot time service function callback is to write a small wrapper that just converts function semantics for us. For one-time services, that's slightly more complicated because UBoot doesn't have any notion at all of one-time services. It only knows how to run at boot time and then tries to just completely disappear. However, it's not really hard to do either. We need to basically teach a function like the reset CPU function to be one-time service aware. That's a patch of about three to four lines at this point per function that we want to call. And at that point, we can just call a wrapper, run the CPU reset function, and you get fully working one-time services in UBoot. So I have enabled this for Raspberry Pi 3 or Raspberry Pi in general, actually, and the Layerscape 2085 systems. But adding new support is really a matter of a couple lines of code. So it's trivial. Get time. We don't implement it all. The reason is most boards that you have out there don't have a clock to ask. If you don't have a clock, you can't really return a working time. And same for variables. If we want to support variables, we would need to support storing variables somewhere and reading variables from somewhere that doesn't collide with what Linux actually uses at the point in time when Linux calls our one-time service code. We don't have those separate devices on most devices that we want to support with this. And for additional tables, very simple, we just put our device tree in there. We have a device tree. We can load the device tree. We put it in there. Done deal. And at that point, we actually have everything we need to execute an EFI binary, right? So if we take a look, this is just a boot log from a Zinc and P system. It just boots up into UBoot and then gets you to a shell. So what we need to do to boot an EFI binary on these with current UBoot. So if you just take UBoot 2016 or 5, for example, it's all implemented. What you need to do is load your GRUB binary, load your device tree, and call boot EFI. Done. Get a GRUB. As simple as that. So if you take a current UBoot, you can always just manually load an EFI binary, which could be GRUB, which could be Linux kernel itself. It's also EFI binary. Can be anything that you like. It could be the open BSD boot loader, whatever you prefer. But if we go back, actually, and take a look at this excerpt, one thing you might have realized while reading through all of these lines here is there's something down there that's called autoboot. So what is autoboot? 
Autoboot just means that UBoot goes in and executes a boot script that's predefined in the configuration. And on most systems these days, that's a distro script. It's just a set of templates that go through a list of different devices and searches for known good sources of booting from them. So really the distro script just goes in, searches your disk, and looks if it finds an EFI binary at the spec defined removable location path for EFI binaries. That's just part of the EFI spec. And searches if it can find the device tree. If it doesn't, it just doesn't load it. And then usually, of course, your EFI binary is grub 2. So you don't press any key at all, and it just automatically finds everything and you get grub. Which is pretty much how you want to boot these days. You don't want to mess with boot scripts, you don't want to mess with anything at all. You really just want to have things work out of the box. So are we standing? Pretty good. As you can see, everything is implemented. So we have EFI objects. We have console support. We have disk support. We can even do pixie boot support. I see pixie boot by now. Video support works. You can see graphical output. We can run grub 2. We can run linux. I have not tried to boot windows yet. There were patches on the mailing list to enable X8664 about two days after I posted my first patch set. So it really is not hard to enable a new architecture. But they haven't actually progressed since because the guy was doing something different since then. And I don't really care too much about booting windows right now. If you compare code sizes between enabling EFI support and not enabling EFI support, you can see that the difference is negligible. You increase your code size by somewhere between 10 and 20k. Which if your code is already 500 kilobytes big, 10 or 20k doesn't really account for anything at all. It's completely negligible. Which means that upstream we are now enabled by default. So if you just take Uboot and you do a dev config for a board, you get EFI support on today's Uboot. You don't have to mess with anything at all anymore. You don't have to mess with U images, with Z images, with anything. You just run grub and it works. Since I'm telling you that everything works, I should probably also show you that it works, which is more fun. So for the demo, I'm going to plug this Raspberry Pi 3 I got in here into the HDMI connector. And connect power. Let's hope this one works. All right, there you go. So this is Uboot. This is Grub 2 booting up with graphics and everything. All the bells and whistles you would usually know. If you have a USB keyboard attached, you can even use the USB keyboard to edit a command line, do whatever you like. You boot into Linux and there you go. You get a fully working distribution with just the way you would usually expect it to work with all the Yask boot loader things working, with all the Pearl boot loader updates working. Everything just works the way you would imagine it to have worked from the very first day. I'll just leave it at that one. So the next slide is really just to thank you and go out and try it for yourself. And do you have any questions? Saifah, please get a mic there right next to you. Yes. What's the idea behind slowing down the Raspberry Pi boot even more with more complicated stuff? I mean, the one thing, if this doesn't work, people will go Googling and ask why it doesn't work, everybody will tell them to go away. Just boot the Raspberry Pi like it's supposed to boot. 
It depends on what your goal is. Is your goal to support the Raspberry Pi? Boot the way the Raspberry Pi does it. If you want to really actually do it the Raspberry Pi way, use the downstream kernel, use whatever method you like, I don't care. If you want to boot the Raspberry Pi, like you boot any random normal system, if you want to actually go towards a maintainable future, you want to have that path. But then it needs to speed up a lot more because I mean, it's a lot slower than booting a normal boot. There was a 10 second timeout. Yes, it does boot slower, but that's all things that are going to go away. The big advantage of this, in particular for the Raspberry Pi is that you now have a boot menu to select different kernels. Because the way it worked before is that when we updated a kernel and the new kernel for some reason didn't work like some kernel head, and then you were kind of stuck and you now had to take the SD card and put it into different machines, tweak siblings and so on. And now with this couple seconds more in the boot process, you can just select a different kernel if the default selection doesn't work and you still have a working system. You can also add a command line on the fly from the boot loader. It just basically moves your Raspberry Pi from a device that you target to a device that you work on. That's basically the shift. It goes from this is something like an embedded device, like a mobile phone that I just want to cross compile things on my PC on and then push them onto the device over towards this is the device I'm actually working with and working on. You would never imagine to directly flash your kernel into your flash chip on your PC, would you? Well, maybe you would, but nobody else who is saying would. So would this also mean that if you have butterfs and the snapshots that it's all just working? Yep. But if it's snapshots, snapshot, wall-back booting, all of that just works out of the box. The other question I have is around Pixie Boot. So Pixie Boot is not implemented by the firmware in that case, so it's really done on the UFI side or? Okay. It's probably just a naming problem. EFI is the specification that tells you how to implement these interfaces. Uboot implements that specification and Tiano Core implements that specification. You can use both to boot your system and at that point, because you're booting your system from those pieces of software, they become your firmware. So on the Raspberry Pi, Uboot is my firmware. On your PC, a Tiano Core-based fork from your vendor is your firmware. It's just a naming thing. And that firmware implements EFI. Or actually, it implements UEFI because it's 2.0. But it will work. That's the important part. So again? Pixie Boot will just work. That's the important part. Yeah. The point about this is it won't be any different from a PC at that point. As soon as you have your Uboot on there, it will behave as a PC does or like the overdrives that we saw earlier do. Just at a much lower price point. More questions? Size of relief? All right. Great. Well, thanks a lot. Thank you.
Booting is hard. Booting in the ARM world is even harder. State of the art are a dozen different boot loaders that may or may not deserve that name. Each gets configured differently and each has its own pros and cons. As a distribution this is a nightmare. Configuring each and every one of them complicates code that really should be very simple. To solve the problem, we can just add another layer of abstraction (grub2) on top of another layer of abstraction (uEFI) on top of another layer of abstraction (u-boot). Follow me on a journey on how all those layers can make life easier for the distribution and how much fun uEFI really is. After this talk, you will know how ARM systems boot, what uEFI really means, how uEFI binaries interact with firmware and how we are going to move to uEFI based boot on openSUSE for ARM.
10.5446/54564 (DOI)
My name is Christopher Hoffman. I'm working for SUSE for quite a while now. It's actually 16 years now. I spent a while in different other teams, for example, also the Open SUSE team. I think it was about four years ago when we kind of started our continued developing also OpenQA. We took it over from Bernard Wiedemann. After I changed to the YAS team two years ago, it was kind of a good coincidence that we also decided in the YAS team to use OpenQA a bit. So actually, this talk should not be about basics of OpenQA. Just to explain it in a nutshell, OpenQA tests our installation process and more or less does this by just sending key presses to a QA more instance where an installation is running and it checks back whether that happens, what should happen by matching screenshots. This is more or less like the famous monkey and a typewriter more or less. And since the installation process is, let's say, nearly 100% YAS, we as the YAS team and what we are developing affects a test run of OpenQA quite a lot. So whenever we change something, it is most likely that we even break a test run in OpenQA because even a slight change in the UI, for example, lets the test run fail. And since we are in the meanwhile also working together with the UI design team, this happens quite often that we change actually a UI. And what to do now, how to avoid those breakages as much as possible and help our QA department. We should test as soon as possible. So if we change something in YAS, then we should even always keep in mind that this might somehow affect OpenQA and QA department. And so we can even kind of warn the people from QA that we change something what they should expect to break test run. And what we also try to do is that we can even maybe deliver an adapted test run or the matching screenshots together with a new feature. Yeah. So to be able to test what we changed, we of course need to get our new software, our new packages into OpenQA. Since OpenQA tests ISO images, the obvious thing would be let's create ISO images and test it and maybe even on every package submission of any YAS package. And since massaging a DVD image takes quite some time and needs some resources, maybe the better approach would be just pack together what we changed, put it in a drive update disk. I hope you know this feature actually. I'll show you later. You put all the changed stuff in the drive update disk and this just contains a few packages. It's easy to master and then you can feed it into OpenQA. And this is, yeah, to create such a drive update disk, there's a nice command. It's actually developed by one of our team members. And there's an extra package for that. Just use this command line. And then you, drive update disk falls out, put it on an FTP server. And then if you are kind of familiar with OpenQA, schedule a test run with this parameter to let just include the drive update disk into the instances. To do this, you of course need shell access to the machine who actually runs the test. And if we would use the official instances of OpenQA, kind of every developer who wants to test something needs access to that machine. And this is obviously not a good idea. But to develop tests and enhance tests, you need this access more or less anyway. So most, the most obvious, most best idea would be just run your own OpenQA instance on your machine. But if you don't develop OpenQA tests that much, you don't need that that often. And after a while of not using it, you need to update everything. 
Let's say after two months you need to update everything: the testing software itself, the test runs and all the needles. And this is an annoying effort that not everyone should have to do on their own. So what we did in our team: we decided to get an openQA server for our team, where we can more or less offer our team members a running openQA instance where they can also go and develop. And the side effect is that it's much more powerful than most of the desktop machines you have on your desk. Maybe I was a bit fast with this slide. But how can one develop tests on such a machine? How can several people develop tests together on one machine without breaking the other guys' stuff? Because the installation process needs a specific order of tests to more or less get a running system out of it, and if at some point it breaks, for example in the time zone selection, the installation cannot continue. And if I change the time zone test and it breaks for some reason, someone else who wants to change another test cannot test his own stuff, because he doesn't get to that point. So we needed a way for several people to have their own code on the same openQA instance. And there's a nice feature in openQA that we obviously need: we can test several distributions there in parallel, which have their own test repositories and their own needle repositories. And we can abuse this to create kind of a fake distribution per user, so that we can develop in parallel without affecting each other. Okay, this console output might look a bit strange, but it's more or less the directory structure of openQA. All the tests are in the test directory. Let's say a user, Kenny, wants to create his own distribution for himself to develop in. He just creates a directory in this test directory, where, for example, there are also opensuse and sle, the regular distributions we are usually testing. So Kenny creates his own distribution. Kenny can fork the openQA tests into his own repository, where he can develop, and after he finishes his development he can open a pull request to upstream openQA. After cloning all the stuff in there, you still have to create a few other directories, which is somehow not so straightforward, but it works. You have to create a subdirectory under the products subdirectory, where main.pm, which is the main test schedule of openQA, needs to be copied from the distribution you want to base your stuff on, which in this case is openSUSE. And then you also need the needles directory. Short explanation: the needles are those things that you try to find in a haystack, which means they are the screenshots you compare the screen output of QEMU with, to find out whether you see what you expected to see. And those are obviously different between the main distributions SLE and openSUSE, because we have a different theme there: the one on SLES is black-backgrounded and the one on openSUSE is, until now at least, grey-backgrounded. So we have different screenshots for different distributions, and if you want to base your fake distribution on openSUSE, you use the openSUSE needles in this case. And this is the result of all this. After you have created your fake distribution, you can schedule jobs. In this case the best option is usually the clone job tool, which clones already existing test runs with some tuned variables. In this case you clone the already existing test run 171.
And override the parameter DISTRI with your own new distribution name, which is kenny. So you override the openSUSE distribution you want to test with your own, so that it uses your own test directory, your own needles and your own tests, and you get a nice test run. Yeah, let me just check where I am. So we can share one machine between several people to develop tests. Of course, this is not the only thing. What also helped us a lot is that we talk much more with the QA department about openQA. So in all of our sprint meetings, meaning sprint planning, sprint review meetings and standup meetings, we have someone from the QA department that we can contact directly, this guy here, Joseph, and this also helped us a lot to avoid duplicated work. This doesn't work everywhere; we recently had a case where QA and ourselves actually developed the same test, but we found that out quite early. Then, what else can we do with our own testing machine? Of course, if you have a powerful machine, you want to automate stuff. What I'm planning for the future is to create those driver update disks in an automatic way, maybe even on check-in, so something with Jenkins that creates a driver update disk and feeds it automatically into our test runs. This might mean even less work with testing for us and would support the QA department, but I'm not yet at this point. Oh, there's another slide for this. So I was a bit faster than expected, but I guess you have some questions about that, and feel free to ask them now. Oh, yeah, you need a microphone. You can also use sign language if you want. So, yeah, it's off. Okay. So maybe the PA guy can raise the slider. It's working. So I'm not too familiar with openQA, but I was wondering if it's able to test things which are not the installation, but rather, for example, a web application running in a browser. Can you simulate a test of clicking through and using a web application within this operating system? So from the YaST side, from the YaST view, we actually only care about the installation, more or less, and that the system is usable. openQA, if you ever checked our instances, even tests applications; one of them is Firefox. But testing a web application would actually involve something like "boot existing installed image", which is possible, then starting your application and trying to use your web application. Mouse clicks, though, are something we did not use yet. It does support mouse clicks. It does work. We have used it for testing the openQA web UI, so it can do it. The short answer is apparently yes. When we developed it, the mouse was not really good to use; it had its bad days, but it's better now. You had a question? Kulo has questions. What I'm wondering is who is responsible, and on what schedule are you updating the instance and the tests you base your development on? Is it coordinated during the sprint meetings, or are you just floating? The point is that I did not understand you very well because of the speakers... On what time frame, and with whom, are you coordinating the updating of the system and of the tests that you're developing on? The test development is more or less just a manual thing. Someone in our team gets the job: you implemented a feature, please write a test for it. And then someone who needs a test does this manually.
So after manually testing the test, you can do your submit request. So it's not that we use it automatically for now; what I want to do in the future, of course, is something like that, but it's not yet the case. I meant the system, like the openQA installation. You mean how I update the openQA installation? Yeah, who does this, and how is this coordinated in the team? Usually, so far, I do this, and we have a cron job for the needles using fetchneedles; that stuff is updated every 15 minutes, something like that. But openQA itself I update when I see it as appropriate, which was recently the case when you changed the UI a bit. As soon as we notice that something does not work as expected, or differently than on the official instance, I know that I need to update something. The tests and the needles are always up to date. Anyone else? Okay. Then I'm still a bit early. So, thank you for listening to my talk. Join the conversation, as the presentation template suggests. Have a lot of fun and join us. That's it. Thank you. Bye.
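To spell out the per-user setup described above, the steps on the shared machine come down to something like this sketch. The paths are the usual openQA defaults as far as I remember them, the Git fork, the user kenny and the cron user are placeholders, and job 171 is just the example test run from the talk:

    # one "fake" distribution per developer, next to opensuse and sle
    cd /var/lib/openqa/share/tests
    git clone https://github.com/kenny/os-autoinst-distri-opensuse.git kenny

    # main.pm (the main test schedule) and the needles, taken from openSUSE
    mkdir -p kenny/products/kenny
    cp opensuse/products/opensuse/main.pm kenny/products/kenny/
    cp -r opensuse/products/opensuse/needles kenny/products/kenny/needles

    # clone an existing job, overriding DISTRI so it picks up Kenny's tree
    openqa-clone-job --from localhost 171 DISTRI=kenny

    # keep tests and needles current on the shared instance, e.g. via cron
    # (script path and user are packaging defaults, verify them locally)
    # /etc/cron.d/openqa-fetchneedles:
    */15 * * * * geekotest /usr/share/openqa/script/fetchneedles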
openQA is openSUSE's powerful installation testing environment. It normally tests whole ISO images that need to be mastered first, so it is not very straightforward to check single packages during the development of new features or bug fixes. I'll show you how we managed to test our stuff as early as possible without mastering whole ISOs, and how we enabled our developers to easily adapt existing openQA tests to changes in YaST's behaviour and user interface, so that we can deliver updated openQA tests along with updated YaST versions.
10.5446/54569 (DOI)
Okay, so I'm going to talk today about one addition to the MySQL database server, which is not as visionary as the previous talk but, I guess, still pretty technical and, I hope, very useful. Okay, so we are going to talk about the MySQL firewall. Right. First of all, a little bit about me. It's my fourth conference already, so I guess most of you know me, but I've been with MySQL for longer than I can remember now, and I'm working on security and monitoring in the server. There are also some personal facts about me here. All right, so the agenda: first of all, we will try to understand what the MySQL firewall is and how it works, we will also have an example with a WordPress installation, how to secure it and how to make it more robust, and hopefully we will have some time for discussion too. Okay, so what is the MySQL firewall? It's really, really simple: it's just a tool to make SQL injection attacks harder. And SQL injection attacks are one of the most well-known breach vectors into web applications. Here is a quote from a very reputable yearly report on security; if you are not familiar with Verizon's report, you probably should be if you are interested in security. And that's what they say: they call SQL injection attacks the elder statesmen of breach vectors into web applications. So it is actually really important that we take all measures possible to protect against them. Okay, so why the MySQL firewall? It gives you better SQL application security: user accounts can execute only the SQL that the application provides to the server. It also provides defense in depth, so it's an extra layer; it does not interact directly with the other layers of security, so you can take this extra measure and provide another level of defense for your database. And it does not require application changes: your application runs as it does, and the server is the one that applies that extra security measure. And here is my favorite cartoon on the subject. I don't know if you are familiar with XKCD, but this is really hilarious. So this is why SQL injection is important, and this is what it can do to you. Right. So how does this all work? What is the firewall, basically? It is an engine that sits inside the MySQL server, normalizes the incoming queries, and keeps a statement cache of all the queries that are allowed to be executed; then it compares the incoming queries against that statement cache. So this is, in a nutshell, the architecture of it. Here is how it operates. You have the MySQL server and you have the firewall plugin, which sits in front of the server proper. When a query comes in, the firewall says, okay, this is a query that I need to normalize. It normalizes the query by removing the constants, the comments, the whitespace and all of that, then it searches the statement cache and finds such a query, which is SELECT ? + ?. So this query can go in and be executed by the MySQL server. What if another query comes in? This is the most popular form of SQL injection: you take one of the constants and replace it with another statement, well, part of the previous statement plus the statement that does the actual harm. So this is what this query looks like.
If you replace the two with this extra data there and it's not properly escaped, you will get a statement like that: SELECT 1 + 2 plus some SELECT query or some UPDATE query, whatever. But it's not in the cache, so the firewall will deny it and it will never even land in the execution engine. Okay, this may be a bit small, but this is the full state diagram of the firewall. It receives a statement from the client, makes a digest of it, then it checks whether the user is in protecting or detecting mode for unknown queries. If the statement is in the whitelist, it is executed; if it's not in the whitelist, it is subjected to these additional measures and eventually rejected. As you see here, there is a recording mode, this part here. When the user is in recording mode, the query will still be executed, but it will also be stored in the whitelist. This is how you fill the whitelist with queries. Okay, so installation is pretty simple. This is our GUI tool for administering the MySQL server, MySQL Workbench, and it has a firewall section here. As you can see here, it's already installed, so you get the options to uninstall or disable, but there is a button which says install if it's not installed. So it's that simple to install it in its basic form. Behind the scenes it's of course all command line, and being kind of the traditional MySQL developer that I am, I wanted to show you here the command to actually install the firewall through the command line. It's this single command at the top; we have an SQL script that does the whole installation. And then you can check whether the firewall is operational by checking the status variable, which says whether it is or not. Okay, so operating the firewall. I'm not going to go through all the modes of the firewall, just give you a practical example. I've taken a WordPress installation and run it through the firewall, just to demonstrate its usefulness. Okay, so step one, install WordPress, which I think most of you have probably done already, so I won't go into the details of that. WordPress uses a default database user, wordpress@localhost, and it runs against a local MySQL server, obviously, because that's how the user account is defined. The MySQL server is also seeded with the schema and data for WordPress; there is a script and procedure for that. So I've just installed WordPress with all the default settings, as you can see here at the start page of my WordPress installation. All right. So the next thing we do is put the firewall in recording mode for that particular user, because there are no queries in the whitelist yet, so we need to add some, basically. And we add them by putting the firewall in recording mode and then running through the WordPress web pages, just to make it execute the queries that it executes. So how do you put a user in recording mode? You go again into MySQL Workbench, you take users and privileges, then select the WordPress user and change the mode to recording. So again a GUI operation, but for the people curious how to do it on the command line, I have the command line on top as well: it's a call to one stored procedure that enables recording mode, basically. Right. So step three, this is the relatively non-trivial part. You need to click through all of the WordPress sequences that you want to stay enabled when you move to protected mode.
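On the command line, the steps so far boil down to roughly the following sketch; the install script, variable and stored procedure names are quoted from memory of the Enterprise Firewall documentation, so double-check them against your version:

    # install the firewall plugin and its tables via the bundled SQL script
    mysql -u root -p mysql < /usr/share/mysql/linux_install_firewall.sql

    # check that it is operational
    mysql -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'mysql_firewall_mode'"

    # put the WordPress account into recording mode so that everything it
    # executes gets normalized and added to its whitelist
    mysql -u root -p -e "CALL mysql.sp_set_firewall_mode('wordpress@localhost', 'RECORDING')"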
Okay, so what I did was go into the WordPress installation, create a post and save a draft of it. That's pretty much what I started with, and this generates some queries, as can be seen here. This single sequence generated 63 queries, which is quite a lot, and as you can see, some of the queries are actually normalized; there are obviously parameters here which are the result of the normalization. So these are all the statements that the WordPress installation would execute against the MySQL server. Right. And you can actually monitor them in the GUI; there is a firewall rules tab in the GUI which will show you the active rules and the rules that are being recorded. So as your clicking through WordPress goes on, this part will increase more and more, and there will be more queries here. You can also do that in stages. You can take the WordPress installation, do the basic exercise through its pages, and then save the result; you can even save it to a file. Then you can load the file into a table on the running MySQL server, and so on. And once you are done with that and discover that you forgot something, you can put it in recording mode again, record another session, and then add it to the already existing rules that are in effect. So it can be an incremental process. You can do it with different users, because these queries don't really depend on the actual user name; so you can have a test user that you use to record the queries and then move these queries to the active user that WordPress uses. There are a lot of possibilities there. Right. So once you are done with that, you can move to the interesting part, which is basically shields-up mode. You say to the server, okay, now I want this user wordpress@localhost protected, not recording anymore, and this basically activates the firewall. Once you do that, you can continue clicking through WordPress and observe the statistics. These are the statistics after one page, after me trying to publish a post, because, well, I didn't record publishing a post. So when I tried publishing the post it didn't work, of course, because the firewall prevented it, and I got 50 queries denied as a result. Then, with me doing some more actions down the line, you can see how the access-denied and access-granted counts increase. Okay, so what does suspicious mean here? As I mentioned, the firewall can just record a violation; it will not stop the query from being executed, it will just record that this query was executed, so that the DBA can later review the activity. So this is what suspicious means: a query is either denied or suspicious, one of those two. Right, and this also gives me a running count of my 63 queries that I recorded and that are active in the whitelist table. All right, and this is what the application would get when a statement is prevented by the firewall. If you execute a statement like SELECT VERSION(), which is obviously not something that WordPress would execute, I get an error as a result: statement was blocked by the firewall. And these are excerpts from the Apache error log, which logs that certain WordPress operations were actually denied by the firewall. WordPress is trying to hide this fact; basically, when this works, you will not get any indication on the WordPress pages themselves, the operations will just not complete. So if you click publish on a post, it will say OK.
And then the post is not going to be published; there's no error indication there, unfortunately. So this is how you check that the error actually was a database one. All right, this was the basic mode of operation of the firewall, but it does support some additional options as well. As I mentioned, it can log the queries that are suspicious, that is, not in the whitelist, and it can do that instead of, or in addition to, blocking them. So you can block and log, you can just log, you can just block, obviously, or do nothing. Okay, so the tables that store the whitelist rules are not in any way special; they are just normal server tables. You can copy them into other tables, you can inspect them, you can add additional queries to them, it's basically all there. There are ways to manipulate the statistics that I've shown previously; if you want to reset them, there is a way to do that, of course. And, as I mentioned, it can aggregate sets of rules: you can record one session, then manually add additional things to the whitelist, record another session, add it to the whitelist, remove stuff from the whitelist, basically manipulate it fully. Right, and that's pretty much how you operate the firewall. Okay, that concludes my overview of the firewall. I guess we are a bit early here, so that will leave us with some time for questions. Any questions? Anybody? Yes, sir. Is the firewall function available within embedded MySQL? Okay, so you are asking whether the firewall would work with the embedded MySQL server. Well, with the embedded MySQL server, first of all, you don't really have user connections; basically all of your so-called connections to the server are running with root privileges. People don't really use it like that, because you cannot really connect to it to set up the whitelist and all of that; it would have to be done by that same user, so it kind of makes it a moot point. Technically we could eventually make it work, but we just don't see the practical usefulness of it. How would you use it? To be honest, no idea; I just had somebody ask me about embedded MySQL, so I was curious as to what security features are within embedded MySQL and whether this would be able to provide some of those features. Well, embedded MySQL is more like Berkeley DB of a sort. It does not have a user model; in embedded mode, all of the ACLs are practically turned off. So it's just a database store, I mean a table store, nothing more.
Yeah, there is a special SQL function that will convert each query to its normalized form. So is this available on the comment line also somehow? Or do I have to do some row SQL queries to insert it into the whitelist table if I don't want to use the GUI mode? Well I didn't actually try the add button, but I suspect that it will normalize it for you. If it doesn't, please file a bug. We will make it do it, of course. So in the last question, let's say I don't, I have not only one WordPress installation on my server, but 10. Is there a way to say apply the same rules for this other user, but of course with another database? Okay, so basically you do it for one of your users, then you save it as a file, then you go to your other user and type it from a file. You can copy these things. Okay, thank you. More questions? Anybody? No? Great. Thanks for showing up and thanks for your attention.
MySQL Firewall is an application-level firewall filter that intercepts incoming queries and validates them against a database of normalized "safe" queries. As an integral part of the server, it takes advantage of the parsing and normalization that is done anyway, so it has minimal impact on normal operations. The firewall has multiple modes. In learning mode it collects the normalized forms of incoming queries in a scratchpad that can be persisted to disk. In alert mode it will just alert the DBA about an unknown query but still let it pass. And in protecting mode it will reject all unknown queries. The firewall can be used to limit SQL injection or as a complement to the privilege system to support only particular front-end applications. We will go through all of the stages of installing, training and arming the MySQL firewall with understandable examples.
10.5446/54571 (DOI)
So, thank you, Martin. Martin, KDE's most famous KWin developer, in fact the lead KWin developer, I should say. This talk is going to be about KDE Neon, so, a relatively new thing. Who here already knows a bit about KDE Neon? All right, so a fair number. So, it's like a microphone with a cable, it's like in the 80s; next thing you know, I'm kind of hunting around the stage. My name is Harald Sitter. I am a KDE developer. I work for a company called Blue Systems, and we do primarily KDE things. Blue Systems invests very heavily in KDE, and one of the projects it sponsors is KDE Neon. So, what is a neon? Well, neon is a noble gas, a noble gas that is primarily known for neon lights, very lovely lights that are created by sending energy through ionized gas, plasma. But let's start at the beginning. What is KDE? This is KDE, these lovely people. KDE is a community. If you have been around in the Linux community for a while, you might also know it as a bit of software, except it's not; it's a community now, and a very good one at that. Look at all the lovely people in that picture there at our annual conference, Akademy, last year. It was in Spain, and this year it's going to be in Berlin. So if you are around, I think it's in September, if you are around in Berlin in September, you might want to come to QtCon, which is the overarching event where it is being held. KDE is a community whose vision it is to bring the entire world freedom and privacy in their digital life. The primary way we do this is through software, obviously enough, and one key component of this software is called Plasma. Plasma is this lovely thing here. It is one of the main Linux desktop environments, if you will. Everyone knows Plasma, I presume. Who doesn't know Plasma? Lovely. Really? Come on. So, that's Plasma. It's really gorgeous, look at the colours. But we are also doing other software. We are doing weird tennis ball sort of things: this is Marble, our desktop globe, I think we call it. It's basically a competing product to Google Earth, and it's using free map material to render a 3D globe of whatever you want: street maps, OpenStreetMap, or, like this, just a random other view. Or we do perhaps weird things: we create an army of potato people using KTuberling, which is a game for toddlers, I would like to say; I think occasionally you might also find a developer playing with it, because why not? So we've got a wide spread in the software area, but we are also doing non-software things. One of the primary areas where we do this is a new project called WikiToLearn. It's basically a MediaWiki platform that enables students and teachers in academia to share all sorts of materials for teaching and learning, and it's basically a resource for free information sharing, if you will. Bloody cat is... So, okay, so now we know what KDE is; we know it does software and non-software things. So what is a neon? Same thing again. Now that's confusing, isn't it? So this is actually a picture of neon, confusingly enough. It's a picture of plasma. So maybe it's a neon plasma, I don't know. Wikipedia had the following to say: neon plasma has the most intense light discharge at normal voltages and currents of all the noble gases, meaning it must be very bright. Which is what Neon is. And that brings us to the meat: Neon is a project that seeks to get KDE software to the users in as quick a time frame as possible. Now, KDE Neon started as a project.
The idea of it probably came up last year in Los Angeles, where Jonathan Riddell, a colleague of mine, and I found ourselves wondering where we want to go in life. At the time we were both Kubuntu developers. You have probably heard of Kubuntu; it's an Ubuntu version with KDE software on top. And we found ourselves wondering, where do we want to go with all of this? Where do we want to go with software? Where do we want to see the Linux desktop? Where do we want to see KDE users get the best possible KDE experience? And we found that the way we did it in Kubuntu was not necessarily the best sort of approach. Kubuntu is of course a distribution, a distribution that is part of an ecosystem of many, many, many distributions with many stakeholders that all want different things. The GNOME camp wants one thing, the KDE camp wants another thing, and the Xfce camp wants yet another thing, and you somehow have to fit all of that under the same hat that is your distribution. And that's tricky, and it's exhausting, and it means compromise. It always means compromise. There is no distribution I have ever encountered where compromise didn't happen on some level. And we were fairly disgruntled with that entire idea of having to care about other people's software, or rather, considering that we find KDE software to be vastly superior to all the other software out there anyway, barring a couple of exceptions. Browsers. So we came up with KDE Neon. KDE Neon is a Linux binary project, which doesn't really mean anything, I suppose. It is based on stable foundations, which also doesn't mean anything. It is a rolling workspace, namely Plasma, which also doesn't mean anything. And it is focused on KDE software, which really means everything. This is essentially the entire mission of KDE Neon: if we get this right, then it's going to be awesome. And the way we achieve this depends largely on what is available at the time. For Plasma to work, we need Linux. Now, there is not one Linux, so we need a base system that we can build Plasma on top of, and then on top of that we can provide all the applications, from the weird tennis ball applications that are globes to the army of potato people. And we want to do that. So those are our requirements, and that's what we came up with. We took Ubuntu 16.04 LTS, which is a long-term support release, which means it's going to be supported for the next two years until the next version comes out, actually longer than that. And on top of that, we stacked all the necessary bits that we needed in order to provide the best Plasma experience. Now, this is all fairly limited to Plasma. Basically, the only reason we even have this Ubuntu base is that there is no one Linux, which is a bit of a troublesome issue to begin with, but something we need to deal with. So we took that because it seemed the best fit at the time. If you have a question as to why exactly we chose this particular base, please ask it at the end, because it's a lengthy explanation. KDE Neon comes in two editions: one is the user edition, the other is the developer edition, and the developer edition additionally has two sort of release modes. So let's start with a look at the user edition. The user edition is, as the name suggests, very much targeting users. It is released software, it is hopefully very stable software, it is software that is built from the release tarballs.
So this is probably the closest version of Neon you would find to a traditional Linux system, and it's targeting the sort of person who likes to have up-to-date KDE software, but not necessarily broken software. Unlike the developer editions, where the stable version essentially is the stable incarnation of whatever KDE software there is, but still a build essentially from Git; it is the daily build, as it were. And the same for unstable, except it's from Git master, so it's even more horrendously broken. The way we do this is with a whole bunch of technology. Probably ten years ago I started to stop doing packaging the way we usually do it, by hand, which I found incredibly tedious and annoying, so I automated everything, to the point where today Neon is being built in a fully automated fashion. It is continuously integrated, which is where the developer editions come from, and additionally we are also continuously integrating, or automating, the way we build tarballs into the final binary packages. So ultimately I'm standing here, and right this moment our thing could build some new Plasma version and I wouldn't even know about it. In fact, I don't even care about it. It's not like anyone would notice if they have a new Plasma, it's so stable and sexy already. And all of that is partially quality controlled. It's partially quality controlled because quality control is a bit of a fricking thing, but eventually I would like to get to the point where it is fully quality controlled. At the backbone of this stands Jenkins, with a horrendous amount of Ruby code backing it up and essentially figuring out where the tarballs are, what the versions are, what we even need to package, when we need to ship it, and so on and so forth. Additionally, we have an enormous number of Git repositories that are backed up by some additional Python technology, and these Git repositories are essentially packaging versions that are semi-derived from Debian and semi-derived from Kubuntu, because we don't want to repeat work that someone else already did, and at the same time we collaborate tightly with Debian to get what we are doing back into their system. So we try to share a lot of work in this department. And the third pillar, so to speak, of the technology that Neon is built on is aptly, which is essentially a very fancy repository manager written in the fanciest of languages, namely Go. Where is Neon going in the future? Obviously containerization is the big topic of the day. I'm actually proud to say that I do have working prototypes for Snappy, and I sort of know how to build a prototype for AppImage, so, barring complications, I hope to have some sort of containerization with good coverage by the end of summer. Of course more quality control: openQA certainly is very high up on the to-do list, as it really helps with getting the coverage up. Right now we're basically doing unit testing, a whole bunch of library testing, you know, ABI testing, that sort of thing, but libraries are only half of the software that we ship; in fact, looking at the code distribution, they're probably a minor percentage of the software. And we're also looking to adopt a new installer. Right now we're basically using the Ubuntu stock installer, which is terrible. Just terrible. Oh, yeah. No one expected that one. Okay, ah, we're done. Finally. I expect you have a lot of questions. I hope you have a lot of questions.
If you want to know more about Neon, you can go to neon.kde.org or visit us on some social media thing, and you can find the slides on Speaker Deck. Let's go. Questions? I'm going to come down to the front because I don't like sitting at the back here. I have lots of questions. I have one really big one, I guess I'll start with that one; it needs a bit of preamble, I'll try and be quick. Back in 2011 we started Tumbleweed the first time, which was basically what you're doing here: your base, rolling stuff on top, and it started out pretty much like you've got right now. Good beginnings. And then KDE needed more and more and more, and the list of pluses that you have right now, the stuff that you're effectively invalidating from the distribution, the bit that you don't want to be messing around with, gets longer and longer and longer in order to get Plasma working. How do you intend to fix that? Right. Without, sorry, just to finish the point, without going, oh, it's fine, we'll rebase it on the next version of Ubuntu, because that's what we used to do. Basically we found this was an intractable mess that we couldn't avoid; we either ended up doing way more distribution engineering, which is the very thing you don't want to do in this, or we ended up pissing off all of our users with massive rebases onto a new base all the time. So what's your plan to avoid that? So, I think we have to differentiate what distribution work is. There is foundation work, which we are sort of okay with; it's not something we can avoid. At some point we will need a new Mesa stack, and at that point we will have to get a new Mesa. But we don't want to mess with GNOME software. If GNOME software, you know, software that's integral to the GNOME workspace experience, breaks as part of Neon, then we don't care. That's where we are setting ourselves apart from a distribution. The compromises that you have to make, the compromises that you're worried about, very rarely come from GNOME. They come from the new systemd, the new Mesa, all that engineering stuff. So then what's the point of Neon? If you're still having to make those compromises and you're still having to make them on Ubuntu, why not use Ubuntu? Oh, yeah. So two things speak for Ubuntu. First of all, the long-term support release sort of thing, which we generally agree with. Actually, it's three points. Secondly, we already know the thing, so it was like a natural choice anyway. But the most important bit, and that really was the selling point, how I pitched it to my boss, is that Ubuntu is a special thing where, with every new release, they basically take the core foundations, Mesa, the kernel, I don't think BlueZ is in there, but generally the core pieces, and they backport them into their LTS release as secondary packages. So you have to explicitly say, we want to use this, but you have the ability. So we don't necessarily have to roll the entire stack up to the next version; we can just switch the foundations if the need arises. So that's essentially the plan. If it doesn't work out, then we will have to go with this very unfortunate choice of having to go to the next Ubuntu release, which we kind of want to avoid, but if it happens, it happens. Why not just go fully rolling? Pardon? Why not just go for a fully rolling distribution? I mean, why not partner with a fully rolling distribution, I guess, would be another question. I'm trying to avoid saying which one I think you should partner with.
There are two problems. First of all, as I was saying, we didn't want to mess with software that we don't care about. You're going to have to do that anyway. For the foundations, yes, not for the other stuff. But the other stuff doesn't break Plasma. Oh, it doesn't? When was the last time GNOME broke Plasma? Then why is it working in Tumbleweed? So, the latest GTK release once again broke the theming, which completely broke all the integration of Breeze into Plasma. It's not broken in openSUSE because the openSUSE devs found a patch for Breeze to incorporate. So yes, in openSUSE it's not broken. Yes, in KDE, upstream, it's currently, I think it's still broken, because we released, obviously, before the GTK release and cannot fix it in the stable release because it's a feature release. So that happens. But beyond that, at times you find that the GNOME Bluetooth stack wants one version of BlueZ and the KDE Bluetooth stack wants BlueZ 4. This is actually a case that occurred. In Ubuntu the choice was: we're going to patch the hell out of the KDE Bluetooth stack, because in Ubuntu, obviously, the KDE stack was rated lower. And that obviously pissed me off massively, but it also broke the entire thing. And that's a compromise. On a technical level, yeah, okay, you have to make it work somehow. And we would much rather go, well, screw the GNOME thing, and it's going to be entirely broken and everyone will know that it is broken, rather than spend time and effort on it. There's nothing wrong with spending time and effort on this, it is greatly appreciated, but that's not what we try to do with Neon. With Neon we very much try to focus on the KDE aspect, and if other things break, they break. But in this case, don't you just postpone the problem to the time when you have to rebase on the next Ubuntu, because they will have moved forward again? Oh, yeah, sure. At some point you will have to go forward again, yes. But it's easier to do that while moving along than doing a two-year rebase afterwards. It's not. Because, I mean, I've been doing this for 10 years and I haven't seen an Ubuntu development cycle where there was not, like, in the middle of it, they landed a new X and everything blew up, or they landed a new GNOME and it wanted a new whatever thing and everything blew up. And yes, but that's... My problem isn't so much that at some point you have to upgrade. My problem is not with upgrading the foundations. Your problem is with the Ubuntu development cycle, and that's what you've basically... It's distribution, it's how distributions compose software. No, it's how Ubuntu composes software. Because I agree with every point you just made, but it doesn't apply to openSUSE, as Martin's already pointed out. Quite possibly it is only an Ubuntu problem. But then why did you base Neon on Ubuntu? You wanted to ask, have somebody ask a question? I just did. Right, yeah, no. So I've worked on Ubuntu for 10 years and we have made marginal progress on getting policies to adapt. We have made great strides at times. When we originally started, Ubuntu didn't allow for any sort of feature update, and in KDE you rarely find a release that doesn't include features. So that policy was loosened, and it was loosened again recently, but it's still not enough.
You still have, at some point, to release the thing, and everything sort of has to align, and you have to make a compromise on how many things you can actually get fixed properly and properly polished within the time frame. And that is a general problem, and it is a problem that I don't want to deal with as far as Plasma is concerned. And to another degree, what is really important to us is getting KDE software to users. So you're focusing very strongly on the Plasma-on-Ubuntu aspect, but to me at least, it is only a means to an end. We sort of need a Plasma to test Plasma, and to have, like, yeah, here's a Plasma, to have something to show. The reason I'm focusing on that is that, if I look at it from the perspective of getting KDE into the hands of users, as an openSUSE guy, this is something we've been doing for seven years. We've been putting KDE's software, via KDE:Unstable, via Factory, then via Tumbleweed, and now via Leap, into the hands of users at the pace that you're just about reaching now with Neon. So that's why I don't really want to get into talking about that, because from my perspective, we've been there, we've done that, we've done it for seven years. We have done it in Kubuntu as well. Don't get me wrong, we have been there, but you still have a distribution, right? And the distribution doesn't include that software. And that's sort of where the problem is. In KDE, we're releasing at such a rapid pace; Plasma has now been evolving over almost seven iterations, and distributions adopted maybe half of that. No, we have 5.6 and now 5.7. But not in the released version. You have to add... No, we have them in Leap. We do. And so that's my point. I applaud you. I love you all. But yeah. And so that's a bit I can't get my head around with this, but skipping to something else a little bit, I guess. I don't want to go into everything. But the sort of other side of the whole story is actually the users' expectations, and the constant bit of feedback we are getting, and this is actually for both of you, is that you're forgetting the users in this message, despite your messaging of, you know, we're giving them what they want. We have one set of users who very clearly want everything great and fast and wonderful, and we've developed Tumbleweed for that, because we truly believe that in order to deliver anything quickly, you have to deliver everything quickly. With Leap we try our best to put KDE on top, because we have no choice. The biggest bit of negative feedback we get from Leap users is that KDE upstream is moving too damn quickly and forgetting about us. And so, you know, you do have two very different sets of users. You want to make your software available to users, but right now KDE is forgetting about a huge amount of people. I shall answer this very quickly. Yes, and I absolutely agree, and I think Martin also agrees, there is a disconnect between what KDE developers do and what users want. And if anything, if Neon can do anything good, it is bridging that gap by putting the pain point right on the KDE developers. Because ultimately, I'm a KDE developer, Neon is a KDE project. If Neon looks bad, KDE looks bad, and all of the KDE developers should care. Stop, yes, I'm done. Goodbye, lovely people.
KDE Neon is a relatively new KDE project, providing an easy and elegant way for people to test the latest from KDE Git, or use the latest releases. It builds binary packages but does not consider itself a distribution. We'll look at the motivation behind KDE Neon, the involved technologies and services, and its place within the KDE community as well as the ecosystem at large.
10.5446/54572 (DOI)
I'm happy to be here. I will talk about reproducible built ecosystem, about why it's useful and what we've been doing. I've been doing, I'm using Debian since 20 years. I started with SUSE because SUSE came with a book in the mid of the 90s. I've done lots of things, I've done Debian QA. That's where I met Bernhard five years ago, where he told me about Open SUSE QA. I was funded by the Linux Foundation to work on reproducible builds together with Luna. We have applied for new funding, which is in the works, and I really don't know SUSE. If I make mistakes, please forgive me, Bernhard will explain the SUSE parts later. It's also, I present the work, but what I really present is the work of all these people. I'm just one of them. There are many people who have worked on this, which is also now, we have a Jenkins set up for the test. These are the contributors to this Debian Jenkins, but all these red people are not from Debian who contributed to this Debian thing to do reproducible tests. It's really a cross-distro project by now. I'd like to know a bit about you, who of you is contributing to free software? Yay, thank you. Who has seen a talk about reproducible builds already? Also some. Great. I'll start a bit with the motivation why we do this. Basically, I refer to this talk from Mike Perry and said, shown at the Congress, CCC Congress in 2014, where they really described the problem in very deep detail. I'll just give some highlights from the talk now. I recommend watch this talk. It's really, really good. Reproducible builds 2014 CCC Congress. They had an example of a remote route exploit in SSH where the difference was only one single bit in the binary. There was an error, the comparison was greater and it should have been greater than equal, and the difference in the binary is one bit. One bit in 500 kilobytes decides whether you have remote route or not. You cannot find problems by just looking at the binary probably. They also had a live demo where they modified the sources in memory when compiling the sources while the source on the disk was still the same. So you look at the source, the source is perfectly fine. You build the source and you get a Trojan binary. They have done this as a live demo at Congress. This is doable. And there's financial incentive to crack developer machines. You take a developer and you don't care about the developer, but the developer ships the software to millions of users. So by attacking the developers, you can hack people. You can make money, lots of money. And securing the computer. It's not only about that the computer has to be secured today. It has to be secure all the lifetime. With physical access, it's very easy to hack a computer. So you can not really be sure what's running on your computer and even less on a built network. And so open build system from SUSE is a very nice target. You attack it and you immediately own millions of people. And it's not really expensive. Like paying five or ten million dollars is not much money for a state sponsored attacker or large criminal organizations. If you want to or whatever the German government uses it, then ten million dollars to attack the German government is nothing. So watch the talk from Congress if you still don't think this is user-full. Or that another example, the CIA had from the Snowden documents, they had a design of compromising SDK which developers download so they can attack the users. The CIA described in the white paper how they would do it. 
And you can say, yeah, the CIA might not do it, but this has happened: XcodeGhost. It happened last year, where somebody took the SDK for iOS, compromised it, and put it on faster servers in China where the download speed was better for the Chinese developers. So they used that SDK, and 20 million applications were trojaned. So this is the problem we are protecting against. Because the problem is: free software is great, you can share it, modify it, use it, pass it on, but nobody uses the source. We all use binaries. And there's no way to see that the binary comes from the source. All the freedoms of free software apply to the sources, but we all use binaries. So our solution is that we promise that anyone can always regenerate the exact, bit-by-bit identical binaries from the same source. If you can do this, then you know the binary comes from the source, and looking at the source makes sense. And we call this reproducible builds. It's not reproducible in the sense used in the Open Build Service, where it means that you can repeat a build; the Debian bug tracking system has the same, reproducible bugs are things which you can trigger again. But when we say reproducible builds, we really mean bit-by-bit identical. That is what reproducible means in the sense I'm talking about. And there's a demo. So this now builds a Debian package five times; just building it takes 20 seconds, I think, and you will see at the end there are checksums, and the checksums will be different for the binaries. So here are the checksums, the hashes, and they are all different for the debs. The debs are the binary packages; the others are the sources. And if I repeat this now in a reproducible way... This compiles the exact same sources five times, with some modifications we made to get reproducible binaries, and you will see the end result: we have five Debian packages which have the same hash. Here, the hash is identical. Can you hear me? Am I speaking too fast? The acoustics are quite bad standing up here. So this is what we want: we want always the same binary packages. They were built five times and they're the same. And we think this should become the norm. We want to change the meaning of free software, so that it's only free software if it's reproducible. Because otherwise it's just software. I think free software should always be reproducible, because you can only be sure that a binary comes from the source if you can reproduce it; otherwise you need to believe somebody, and believing is for churches or something. So this idea is really, really new. There were some discussions around the year 2000, and maybe one or two projects that did it in academia, but nothing really took off. In Debian there was a mail on the developer list in 2007 where somebody said we should do this, and people said this is not possible. And then in 2012 Bitcoin and Tor Browser made their software reproducible. The Bitcoin people were afraid: Bitcoin had a market capitalization of four billion dollars, and they were afraid that if a trojaned binary showed up and the money went away to some Bitcoin wallet, the developers could not prove that it was not them, that it was somebody else who did it. So they made their software reproducible so they could prove: we ship what we say we are shipping. And Tor Browser did it for similar reasons. In 2013, both Debian and FreeBSD started working on this. The FreeBSD efforts went largely unnoticed in a wiki.
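Stripped down, the check in that demo is nothing more than building twice and comparing hashes; hello is a stand-in package name here:

    # first build
    dpkg-buildpackage -b -us -uc
    sha256sum ../hello_2.10-1_amd64.deb > first-build.sha256

    # clean up, build again (possibly on another day, on another machine)
    dpkg-buildpackage -b -us -uc
    sha256sum -c first-build.sha256   # identical hash means reproducible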
In Debian, Lunar gave some talks which got more people interested, including me at the end of 2014, and I set up reproducible.debian.net, which is just a test setup that I will explain shortly. And then in 2015 this really took off. We gave lots of talks. We started to move away from the reproducible.debian.net web page towards reproducible-builds.org. And we had a meeting in Athens with people from 16 projects; I think the only major projects who were not there were Ubuntu and SUSE. There were Fedora, the BSDs, OpenWrt, and several more. We will have a meeting this year again; we invited SUSE, but the SUSE people we reached out to didn't have time, and hopefully some of you will be there this time. So, what we have now; it's a huge, complex topic, so I'm starting with what we have. We have this web page, reproducible-builds.org, where we describe the concept, describe common problems and common solutions for these problems, and where participating projects are listed. I added SUSE as a participating project today, mostly because of Bernhard's talk. This is really the URL you should remember; everything is linked from there. We used to have a Debian wiki page with lots of information, but we moved it from the Debian wiki to this web page. And we have tests.reproducible-builds.org, which is the test setup, which is Jenkins. We are continuously testing Debian unstable, testing and experimental on amd64, i386 and armhf. We are also testing OpenWrt, coreboot, NetBSD, FreeBSD, openSUSE, and F-Droid, which is not really working. And testing in this case means: we build it once, then we modify the environment, and then we build it again and compare. This is the testing we do; we build twice, basically. And we build about 10,000 packages a day, twice. We have 300 Jenkins jobs running on 30 hosts; it's mostly Python code and bash, and there are 30 contributors to this Jenkins setup. And the result is static web pages and JSON. I just spoke with Bernhard before the talk: even if SUSE did not use this setup, if you just feed us JSON we could integrate it into the same web pages, so that we have all reproducibility results in one place. Why this is useful I'll explain in a second. We have lots of resources, 300 gigs of RAM, 100 cores, thanks to ProfitBricks, who have been very nicely sponsoring this for four years now. And we have a zoo of ARM nodes, 20 small boards, Banana Pi, Raspberry Pi, whatever, which do the ARM builds, and we'll get arm64 boards this year; we're waiting for the hardware. So when we build Debian, we do these variations: we vary the host name between the builds, the domain, the time zone, the locale, the user name, the shell (whether /bin/sh is dash or bash) and the user's login shell; we vary the kernel, and we are working on varying the CPU type as well. On i386 we build once with a 32-bit kernel and once with a 64-bit kernel. The file system is also important, because the readdir order is not deterministic, it differs between file systems. And we vary the time: one build builds today, and the other runs about 400 days ahead, so the year, month and day are different. These are the variations we have for Debian. For the other tests we have a bit less variation, because it's just work to do, and it's a lot of me doing this. I'm working on eight distributions, which is a bit too much, so I always ask for patches and for people to help me. With FreeBSD and OpenWrt it works nicely, with others not so nice. And, yeah, I said this already.
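Done by hand, one round of this kind of variation testing looks roughly like the sketch below; faketime stands in for the clock variation that the Jenkins jobs do, and the concrete timezone and locale values are arbitrary examples:

    # build #1 in a "normal" environment
    TZ=UTC LANG=C.UTF-8 dpkg-buildpackage -b -us -uc
    sha256sum ../*.deb > first.sha256

    # clean the tree in between, e.g. debian/rules clean

    # build #2 with a different timezone and locale, and a clock running
    # roughly 400 days in the future (faketime is from libfaketime)
    TZ=Asia/Tokyo LANG=fr_CH.UTF-8 faketime -f '+398d' dpkg-buildpackage -b -us -uc

    # every hash must match, otherwise the package is not reproducible
    sha256sum -c first.sha256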
So, the problems we found: it's mostly timestamps. It's really timestamps, timestamps all over. And it's not so much timestamps from the compilers, but more timestamps from documentation systems; every documentation system, or most of them, thinks it's a good idea to put the build timestamp in, and often these timestamps vary with the time zone, which is really annoying if you want to build in different time zones. I'm not sure you want to do that, but we do. The time zone is the other common thing, and the locale. Locales also end up in the build: hashes are sorted differently between builds, the sort order differs by locale, different languages sort things differently, not the alphabet, but the other letters. So this all goes into the build. And everything else... but everything else is maybe 10% of the cases. It's mostly really simple stuff, but it's just a lot of stuff. Lunar gave a talk at the last CCC camp where he gave 30 examples of common problems. For example, gzip normally also puts a timestamp, I think, in there, and you can just use gzip -n. All these tricks about what to do are in Lunar's talk from there, which has really good examples, or in our documentation. Then we wrote a tool, diffoscope, the tool we use to analyze the difference between two builds. It recursively unpacks a deb, which includes a tar archive, and inside the tar are files, whatever; there's a PDF in there, so it goes into the PDF and finds the PNG in the PDF, and so on, recursively. It does the comparison, presents it nicely in HTML, and falls back to binary comparison. It's available in all major distributions; I'm not sure whether it's in SUSE already, but it's packaged for SUSE. diffoscope.org is the main web page. And this is how diffoscope looks: on the left is the first build, on the right is the second build, and you can see there's a version number leaked in there at the bottom, and it will show that really nicely. You can go to try.diffoscope.org and just try it. In the beginning diffoscope was for Debian packages, but now you can give it two objects: two RPMs, two CD images, two directories, two PDFs; you can give it two things of the same type and it will compare them. So diffoscope is also useful as a byproduct: if you have a new version of something and you want to see whether the difference in the binary is what you expected, you can also compare two different versions with it. And diffoscope is just a tool for debugging. If you want reproducible, it means bit-by-bit identical, so you don't analyze the difference, you just hash the binary or the object, and if the hash is not identical, it's not reproducible. We only care about diffoscope for debugging; for the question of whether something is reproducible or not, just use sha256sum. These LEDs are still hot.
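As a concrete illustration of the debugging workflow just described (file names are placeholders):

    # typical fix for one of the common problems: tell gzip not to embed
    # the original file name and timestamp
    gzip -9 -n docs/manual.txt

    # debugging a difference between two builds of the same package
    diffoscope --html report.html foo_1.0-1_amd64.deb foo_1.0-1_amd64.b2.deb

    # the actual reproducibility verdict never needs diffoscope, though
    sha256sum foo_1.0-1_amd64.deb foo_1.0-1_amd64.b2.deb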
In Debian we set it from the latest debian/changelog entry; in other cases it's the last modification of the Git repository, or whatever metric there is to set it. But there's one thing more, because the other thing you need besides the date is the environment in which it was built. When people put a build date in there in the past, they also wanted to express "I built it on this date", meaning these libraries were used. But we are recording the build environment anyway, so that is also not useful for us. And we had some success getting SOURCE_DATE_EPOCH accepted by other projects. GCC is now using it for the date and time macros, and for Clang we also have a patch. Some documentation systems have it, we have patches for RPM, and more and more build systems are using it now. So SOURCE_DATE_EPOCH has been adopted, I would say. We wrote a specification; the specification is two or three kilobytes of text, it's really, really short. It defines what the variable means and how you should set it. You can go to the spec URL and read it; it's a spec. So, what we did in Debian: this is the graph some of you might have seen at some point. As of yesterday we are 89% reproducible in Debian unstable on amd64. The green ones are the reproducible ones, the orange ones are the unreproducible ones, and the red ones are failing to build from source. The black ones are not built yet, or the package doesn't exist for that architecture, or something. In testing we are even over 90% reproducible packages now, which sounds nice, but it still means there are almost 3,000 source packages which are not reproducible. And we categorize issues when we find them. We have a git repository, "notes", where we have 206 different issues. I just checked: out of these 206, 93 are timestamp related and 39 are locale related, and the 70 others are, I don't know. So we have 3,260 notes, which are issues we found in packages. We have 1,800 unreproducible packages in sid, but only 200 without a note; all the others we have already looked at and put a note in there describing what the problem is, or what the probable problem is. And the same for the packages which fail to build from source. We maintain this in a git repository as a simple YAML file. At the moment it's Debian only, but we've just made a specification for how to change the syntax so that we can have cross-distro notes. I guess the next will be FreeBSD, that FreeBSD puts in notes, because many issues are the same in different distributions. There are some which are specific to Debian or specific to FreeBSD, but most of them are the same, because it's an upstream problem. So we want to merge this, so that we can benefit from each other's work. These are examples of issues we found, and I really picked them randomly; as you can see, it's timestamps, timestamps, timestamps. And the other category is fails to build from source, uninvestigated test failures. Because we constantly rebuild Debian, we also find lots of bugs where packages fail to build from source against newer libraries; we have a category for this. So far we've filed 3,000 bugs, I think, and more than half of them are fails to build from source. And we've filed about 1,000 bugs about reproducibility issues, with patches. More examples: timestamps in documentation systems, randomness in ICC color profiles. Who would think of that? So for Debian you can just go to this old URL.
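For tool authors, honouring SOURCE_DATE_EPOCH usually only takes a few lines. The sketch below follows the pattern the specification recommends (use the variable if it is set, fall back to the current time otherwise, and clamp timestamps that are newer than it); the helper names are mine, not part of the spec.

    import os, time

    def build_timestamp():
        # Use the last source modification time if the environment provides it,
        # otherwise fall back to "now" (non-reproducible, but still works).
        return int(os.environ.get("SOURCE_DATE_EPOCH", time.time()))

    def clamp_mtime(path):
        # Timestamps newer than SOURCE_DATE_EPOCH were created by the build itself
        # and can be clamped without losing information.
        return min(int(os.path.getmtime(path)), build_timestamp())

    print(time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(build_timestamp())))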
It also works with the new URL: slash source package name, and it will show you how reproducible that package is. So if you go to reproducible.debian.net/firefox, it will show you that Firefox, the whole 50 megabytes or whatever it is, is bit-by-bit identical. If you go to the Linux kernel, you will see that diffoscope has problems understanding the diff. But you can go to any package, MySQL, whatever, and have a look. And because Debian has 24,000 packages, which is too many, we have package sets; we have 42 package sets now. We have required, which is the base system in Debian; we have build-essential; key packages; another package set is all packages which ever had a security issue; we have KDE, GNOME; we have all OCaml, all Java, all Node, all whatever packages. Haskell is also nice. Perl is good, with over 95% reproducible. And in required we have 10 packages missing, for which we have patches. Another interesting set is the Debian key packages. Key packages are the ones running on the Debian infrastructure and being used to create the CDs, and are on the CDs. There are 3,500 key packages, so quite a lot, and you see it's a bit below the average, because the good average comes from huge package collections like the Perl or the R packages, which are all reproducible, so they blur the statistics a bit. And the other problem: 86% sounds cool, but 437 packages to fix is really a lot, because out of these 400, probably 40 are really hard and 4 are insanely hard, I would guess, just by guessing. So that's 400 uploads to achieve that, and still 40 really hard problems. This is about the key package builds. I can repeat the question, but you can also repeat it. Hello. Just to understand the graph: so this means the key packages, if it's on your infrastructure and it's green here? Key package builds in testing; key packages are just a specific set of packages which are key to Debian, and these are all source packages. And in general we have these package sets which cover some areas, like all GNOME packages. I'm wondering what makes it green here; does this mean that this is already deployed within the distribution? Well, at the moment we are only doing QA, I'll explain that in a second. We are not really in Debian yet, but we can get these results into Debian with three patches, basically. And my point here is just that there are 400 packages; it doesn't sound too much, but it is 400 packages. My point is: 86% sounds great, but 400 packages is still a lot. And this is the Debian bug tracker. We have filed 1,600 of these; these are the ones without fails to build from source, so these are the bugs which are reproducibility issues. It's 1,600 bugs we filed, roughly 1,000 are fixed and 600 patches are still waiting. And then we filed more than 1,000 other bugs, which are not in this graph, about plain fails-to-build-from-source issues. And we try to always file bugs with patches, because at the moment it's just QA and the bugs are just wishlist severity. So we just say: here is a patch, could you please apply it? We don't file a bug saying this package is unreproducible; we only file it when we have a patch, because if it's unreproducible, that's visible anyway. So, what we did to achieve this: we agreed on a fixed build path, because many compilers embed the build location in their products, and Debian historically builds in a random location, so we made it fixed to fix this. There's now a patch for GCC so that GCC creates the same objects in arbitrary paths, but other compilers don't do that.
So we'll have to stay with that for some time. We record the build environment in .buildinfo files; I'll explain that in a second. And we wrote strip-nondeterminism, which is a tool that recognizes timestamps, mostly, and removes them if they are newer than SOURCE_DATE_EPOCH, because then they must come from the build. Bernhard has also packaged strip-nondeterminism for SUSE now, so you could use that. We have diffoscope as a tool to analyze where a difference comes from. disorderfs is another testing tool: it's a FUSE filesystem which returns directory entries in random order, so you can use it for testing and see whether a package builds the same on a different file system. And we now have two packages modified in the archive, which are doxygen and dpkg; the rest is pure Debian. So we are not yet fully in Debian; we have these two packages modified, and we hope to get there this year. So, reproducible builds demand a defined build environment, and it's mandatory that it is possible to recreate this build environment, because if you have different toolchains, then it's sheer luck whether you can recreate the same binary. It might be that a different GCC version creates the same object, but maybe not. You can only be sure if you install the exact same dependencies. And so we created these .buildinfo files and verified that this works for Debian with them. I know that Koji from the RPM side is also designed to be able to recreate the exact same build environment; we have not verified this, and the Koji developers said we need documentation for that. Guix is another distribution kind of thing where this was a design goal, and it works for them. And I'd like to hear other stories about how it's done in other projects. To explain this: a .buildinfo file has the source files and their checksums, the binaries and their checksums, and the collection of installed dependencies. The idea is to take this .buildinfo file, recreate the exact same environment, then rebuild the binary and get the same results. For Debian we are lucky, because everything which was ever uploaded, even if it was only uploaded for half a day, is on snapshot.debian.org. snapshot.debian.org has 20 terabytes or something, and it has everything. I know not every project has everything, but I'm doing the Debian work. But it's clear that we have to solve it, and the other thing is... I'll leave this out. So the .buildinfo file is, in the Debian case, just an RFC 822 text file. It has a format version, the source package name, the binaries, the architecture, the sources, and here the depends; in this example the depends don't have checksums yet, but they will get checksums. The .buildinfo files are a Debian invention, but it's clear that other projects need the same thing. I would recommend to also call them buildinfo files; the contents and the format will differ, but the principle will be the same. And it's clear that it needs to be done, because you need to describe the input you give to the system and the output, to be able to compare it. So the .buildinfo files are the ones which users later download and can use to take the source and see if the binaries they create are the same. What else have we done? We write a weekly report, since May 2015, so report 60 was just published. It covers the progress in Debian, but we now also include FreeBSD and upstream things in it. We had the summit in Athens, I already talked about it, and we will have another one this year, again in Europe, maybe in Germany if it's in summer.
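To give a rough idea of what a strip-nondeterminism-style post-processing step does (the real tool is a separate Perl project that understands many archive formats), here is a simplified Python sketch that rewrites a tarball with sorted member order, clamped timestamps and neutral ownership:

    import os, tarfile

    def normalize_tar(src, dst, source_date_epoch):
        # The output stays an uncompressed tar on purpose: gzip would embed its own
        # timestamp again, which is exactly what "gzip -n" avoids.
        with tarfile.open(src, "r:*") as tin, tarfile.open(dst, "w") as tout:
            for m in sorted(tin.getmembers(), key=lambda m: m.name):   # deterministic member order
                m.mtime = min(m.mtime, source_date_epoch)              # clamp timestamps from the build
                m.uid = m.gid = 0                                      # neutral ownership
                m.uname = m.gname = "root"
                tout.addfile(m, tin.extractfile(m) if m.isfile() else None)

    normalize_tar("build-output.tar.gz", "normalized.tar",
                  int(os.environ.get("SOURCE_DATE_EPOCH", "0")))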
And last year we had two Google Summer of Code students; this year we have four GSoC and Outreachy students, really good contributions, it's really nice that these programs exist. So, Debian policy: this is where we want to go, sources must build reproducible binaries. But we hope this will happen after stretch, and stretch is 2017, so this is, I hope, 2018 or 19. So for now we want to have this "should create reproducible binaries". And this is really just a proof of concept at the moment: right now Debian is still 0% reproducible in practice. It's just three or four patches, but these patches are not merged yet, and for the freeze we need to merge them now. And then the other problem is that Debian doesn't rebuild the archive, so even when we have these patches in, we still have 0% reproducible, because everything will need to be rebuilt to create these .buildinfo files so that other people can confirm the builds. Without .buildinfo files there is nothing. So we'll see. I hope that Debian stretch will be partially reproducible in a meaningful way, whatever that means. But then there's also the other side: how to rebuild, .buildinfo files, signing, user tools. This all still needs design and code. I'll explain a bit more in a second; how much time do I have? So, the first thing we did was coreboot. coreboot is a free BIOS, and it's now 100% reproducible with the SeaBIOS payload. coreboot is a BIOS thing which loads payloads, and with the usual payload the 250-something different coreboot images are 100% reproducible. The problem is that coreboot doesn't release binaries, so it's not clear what to do there. OpenWrt is also quite good; the patches were upstreamed, and then OpenWrt decided to reinvent itself as the LEDE project, so things have stalled there. NetBSD is also there: some patches were accepted, and Thomas Klausner, who did it, was busy with other things, so it's stalled, but it's partly doable. FreeBSD: the base system, which is the basic FreeBSD userland, it's 250 megabytes, has three or four bits which are not reproducible, but they have patches. In 2013 somebody already did a test with their ports, which are the packages, and they had 63% reproducible, but then they stopped working on it; only last autumn Ed Maste picked this up again, and it will also soon build all the packages, or rather the ports, on this test setup, so we'll have more numbers and more cooperation there. Fedora: I set up simple tests of Fedora, but the RPM patches were not there and I was too busy with other stuff, so I left it there. I know that the RPM format includes the build time and the build host and the signature in the format, so they need to be set to nil or other fixed values. Bernhard has some solution there, and yeah, I hope that I can take some patches home. And yeah, Arch Linux, F-Droid, I leave it there, there are too many. The SUSE status you will hear very soon from Bernhard; I'm very much looking forward to that. And there are more projects with known activities: Bitcoin and Tor, for example; Signal also made a blog post and a tweet a month ago saying they were reproducible. Ubuntu contacted us, but Ubuntu is waiting for the dpkg patch to be merged. NixOS; ElectroBSD is a fork of FreeBSD which is reproducible already; Qubes, Tails, Subgraph, they are all looking into this. And there is also commercial, proprietary software which is doing this, which is really funny. Guess which? Is it Windows? The source code is available.
Medical devices in your body, weapons, critical infrastructure like power plants, cars, gambling machines, because the state collects taxes on those: that's why. And I think for the other things, medical devices, self-driving cars, power plants, reproducible builds would also be a really good idea. Okay. I don't know about OpenBSD and Gentoo. So, about the future work: the problem with these .buildinfo files is that, in the Debian case, we need .buildinfo files for 20,000 source packages on 10 architectures, and not all of them are arch:any, so it's probably 100,000 new files, which is a 50% increase of the files on the mirrors, which is a problem with the inodes. So it's just the sheer amount of files, and we also want to distribute them. And we need detached signatures, and we want several entities to sign them: I rebuild it and sign the .buildinfo file saying "yes, I could recreate it", you build it and sign it, and we need to find a way of revoking signatures as well. And this rebuilder thing has not really been thought through yet, how we do it; basically no work has been done there. In the Debian case we could maybe think of individual developers signing some things, but I don't think that will scale. Another option is rebuilds by large organizations, pick your friends, whatever. And the good thing is we would have different entities, so you have the NSA and the CCC and whatever. Or we could just do: Fedora rebuilds Debian, Debian rebuilds openSUSE, and openSUSE rebuilds NetBSD. We need to think about what a good solution is there. And we need end user tools. Do you really want to install this unreproducible software? Do you want to rebuild those packages with unconfirmed checksums before installing, to confirm they are reproducible? How many signed checksums do you require to call a package reproducible? This will differ. And signed by whom? So we've come a long way, but we are still not there. Where we are is that we can probably do reproducible builds now, but everything behind that is still open. It's not even really clear where we need to go and how we get there, but at least it's a possible road now. Yeah, there are still lots of things to do. So if you want to get involved as a software developer: please merge our patches, stop using build dates, please read about SOURCE_DATE_EPOCH. You can also do what Bernhard did: test for yourself, build something twice, compare the results. There's lots of documentation. We have two IRC channels now, #debian-reproducible and #reproducible-builds, but in general come to the Debian one, and we are happy to help anybody with anything reproducible; we are not really Debian focused. And you can also join the existing team. It's really lots of fun, a very diverse group working on very different things. I should have said this: I've not done any work on these patches; I wrote, I think, one or two patches doing reproducible things. I just work on this test infrastructure and give talks. The patches are done by other people, who are not working on the infrastructure. And there are many things to do. So, do you have questions? These are the URLs. The question was how much pushback we got: not so much. Quite frequently it's "but I want my build date, it's important", but then you explain that the build date is not meaningful, and usually people understand. So I would not say we got much pushback; rather, we have lots of people joining this, and we have a thousand patches accepted, which in this context is quite a lot. If I get it right, the buildinfo is not part of the package?
So my question is: why? It should be. The question was why the buildinfo is not part of the package. The buildinfo support is a patch we wrote for dpkg, and the dpkg maintainer is very careful, because dpkg is used by other projects, not only Ubuntu but many others. So he wants to get the details right first, and he's really, really slow to accept patches. He took half of our patches already; there's a new release coming soon. Also, in the beginning we had these .buildinfo files recording the information I had on the slides, and we have now come to the point where we might also want to record more of the environment. So he is very careful, and I think the agreement we have now is that he will include these .buildinfo files, and the .buildinfo files have a format version; that will be format version 0.1 or something. So we'll get that. But we have been discussing this with him since last August; it's June now, so for 10 months. I could also say I'm a bit disappointed about how fast these patches are merged, but I still hope they will get in within the next four months. The problem is that in Debian, dpkg is maintained by one person, and that person is not as fast as we would like. So the plan is to include it in the packages? It will be included, yes; he's also in favor of it in principle, it's just that the patch should be nicer here and there. Okay, thank you. You are asking whether the buildinfo should be part of the .deb package? No, the buildinfo will be part of the build result. When you build a Debian package now, you get .deb files with the binary packages, and you get one .changes file which describes the build results, with checksums of all of this. And in future, when you build, you will get the .debs, the .changes file and the .buildinfo file. I was just pulling on your answer to a different question: I know you cannot do that, because then you would modify the binary file, I think. What was that? Yeah, of course: the .buildinfo file describes the binary, so it cannot be part of it. No, it has the checksum of the result, so if you include it, it would modify the checksum. Yeah, so the checksum of the result is the tricky part. Let's solve this later at the bar. But it's technically not possible to include it; yeah, the final checksum, right. We also have a Twitter account now, for those into Twitter; you can follow us there. Fuck Twitter. They are awful. I would like to point out that in RPM packages, we also put the checksum of the package itself into the package, so it definitely does work for some definitions. Well, that's the checksum of the content in the RPM; it doesn't include the other checksum. But we can really... In the Debian file you have two separate parts, the control part and the data part, and you could just build the checksums of those. No, because we want the binary result to be the same. But what we can do in the RPM case, which includes the signatures, and those are private signatures: if you want to reapply them, you can just take the detached signature and apply it again. But you cannot do the other thing. I'm happy to discuss this later, but I'm sweating and I would like to get off the stage. Okay. Thank you. Thank you.
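As an appendix to the open design questions above (how many independent rebuilders have to confirm a checksum before an end-user tool trusts a package), the client-side check could conceptually look like this. This is pure speculation to illustrate the idea, not an existing tool, and signature verification is left out entirely.

    def is_trusted(expected_sha256, attestations, threshold=2):
        # attestations: mapping of rebuilder name -> sha256 that rebuilder obtained (and signed).
        agreeing = [name for name, digest in attestations.items() if digest == expected_sha256]
        return len(agreeing) >= threshold, agreeing

    ok, who = is_trusted(
        "a3f5...",                                   # checksum from the .buildinfo file you downloaded
        {"debian.org": "a3f5...", "my-own-rebuild": "a3f5...", "some-university": "b711..."},
        threshold=2)
    print("install" if ok else "refuse", "- confirmed by", who)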
The presentation will describe how the Debian reproducible builds team made 85% of the Debian archive reproducible, what steps are left to reach 100% and what steps are needed beyond reproducible builds, so that every user can easily and meaningful benefit from them. While the presentation will be largely about the Debian work on the area, it will also portray many other projects collaborative work on reproducible builds, as our goal is to make reproducible builds the norm for free software: "It's not free software if it's not reproducible."
10.5446/54574 (DOI)
Okay. So welcome to my talk about improving the quality of KDE Plasma with the help of Wayland, this new windowing system which you can see in action here. Our motto at KDE is a little bit "Wayland will fix it". If we have a problem, we know Wayland will fix it. Wayland will fix it. Brexit? Wayland will fix it. World peace? Wayland will fix it. So if there's a problem, we have a solution. Short about me: I'm Martin Gräßlin, my email address is mgraesslin at kde.org. I'm working for Blue Systems, and I'm sponsored to work on KWin, KDE Plasma's window manager, and especially on the Wayland port. A little bit about Plasma: Plasma is KDE's desktop environment. KDE is not a desktop environment, KDE is a community, and Plasma is the desktop environment that this community produces. We are currently at version 5.6.5 and are about to release 5.7, which I've heard will be the default desktop environment in the next openSUSE Leap release. We are currently on a three-month release schedule; we used to be at six months, now we are at three months, so every three months there's a new release. You can find us in #plasma on IRC, or on our mailing list. And yeah, it's an X11 windowing system, but it's currently in the process of being ported to Wayland; it's actually working quite well already. So, a little bit about what I want to talk about in this presentation. First of all, I want to talk about our quality process in KDE in general and also in Plasma. Then I want to talk a little bit about the problems of testing X11; testing X11 if you don't have openQA, that is, because I think openQA makes everything much, much easier. Then I want to talk about my area of work, how to test the window manager. And then I want to talk a little bit about how Wayland changed the world of how we can test in Plasma: how we can test our window manager, how we can test our desktop shell. And then a little bit about the future plans, how we plan to extend all that we've already implemented. So let's start with our quality assurance process. First of all, a look at Frameworks. Frameworks are the successor of the monolithic kdelibs from the KDE 4 times. Nowadays it's 70 independent libraries built on top of Qt. We have separated them into multiple tiers with different dependency rules: we have tier 1, which only depends on Qt; then we have tier 2, which depends on Qt and tier 1 frameworks; and tier 3, which can also depend on other tier 3 frameworks. But of course, no circular dependencies are allowed. We are on a monthly release cycle, a monthly feature release cycle with no dedicated bugfix releases, and currently we are at release 5.23, which means at a monthly release cycle we have been doing that for two years now. From a quality point of view, this has quite some implications. We are in a constant feature release cycle, which means we cannot have bugs; we are not allowed to introduce a bug into a framework that just doesn't work. It also means we cannot tolerate half-baked features. It's not like "oh, I want to get that in, but it's not working yet". No, you cannot do that; it has to wait till it's ready. We can only integrate fully functional, working code into Frameworks. And by reducing the development cycle to one month, we took away the pressure to try to get something into a release, because if you don't make it, the next release is just a month away. Previously it was a six-month cycle.
It meant, oh, if I don't get it in now, then it won't go into this KDE release, which is then picked by the distributions. So if I'm missing half a year, it means sometimes one to 1.5 years till it gets to the user. And that was a huge problem. And of course, we have policies in place for frameworks. We have the policy that our code must have auto tests. It must maintain binary compatibility. If you've ever looked for rules of binary compatibility of C++, I've heard that KDE has the most comprehensive list of what is allowed and what not. Because KDE libraries do not only work on Linux and provide binary compatibility for Linux, they also provide binary compatibility on Windows. So for the Microsoft Visual Studio Compiler and OS X. Of course, our commits are peer-reviewed. And if our CI system has a failure, it must be treated as a stop-the-line event. So till the failure is fixed, no other commit may go into the framework. Now a little bit about the code review process and KDE. First of all, I have to say that every KDE contributor is allowed to commit to any repository of KDE without prior code review. Everybody has commit access to everything. This means we cannot ensure on a technical level that we do code review. It means we can only have a social contract. And that's mostly kept. Sometimes it happens that people commit without review, but then they have to expect that they are sheltered at. The main website for code review is currently still reviewbot at gitreviewbot.kde.org that got originally introduced by Plasma, but pretty quickly picked up by all other projects. It's used what frameworks is using for code review. We have also an old instance from the SVN times. But we are also currently migrating to Fabricator. Plasma, again, an early adopter, is already using Fabricator. And it's looking really good. I'm used to both now, Fabricator looks a little bit nicer, seems to have a little bit better integration with the overall workflow. So that's a good thing. We also evaluated Garret, and the developers were not so happy with it. But we are also used to Garret because Qt uses Garret, and most KDE developers are also Qt developers. So that's also quite common to use. Then once we have our commit peer reviewed, it gets committed, and then it goes to our Jenkins instance. There we have a build job for every master and stable branch of every repository, which is quite a lot. And whenever there is a commit, the one is triggered, which checks the build, which runs the outer tests, which are in the repository. And if something does not work as expected, it reports back in annoying ways, which can be IRC messages, it can be email, RSS feed, whatever. So I think you all know Jenkins. And what's very nice, we have multiple profiles in the build system. So we can say, OK, I want to compile this project twice with different compile flags, so that you can also cover that. Currently we perform the following checks. We compile on Linux. We do not yet compile on Android. And we have currently, after the run last print, which we had last week, more platforms in preparation. So I've heard that Android should be added. There's talk about Windows. Of course, not everything will work on Windows, so Quint will never be compiled on Windows, but that's fine. We have the outer tests run. We get summary of the compiler warnings. We have asan features, which check for heap use after 3D detection. 
So if we access memory which already got freed during an autotest, it will just abort, and we have a nice build error; not a build error, a test error. We get code coverage, and for the projects which use upstream, there are also some tests performed. I think for some projects there are a few more things which are performed, but as I don't use that, I don't really see it. Then, in addition, brand new, we also have a continuous delivery system: that's Neon. It does lots of the things that build.kde.org already does, but it also produces packages and builds a daily ISO, and with that we can actually test an integrated thing for a change: not just the code you randomly have on your system, but something you can actually test. Similar to that is what openSUSE now offers with Argon and Krypton, which goes in the same direction, but Neon is a KDE project directly, so a slightly different target. So now let's talk about testing X11. My experience from working with X11 for quite some time is that you cannot mock X11. If you want to have a unit test which accesses X and you want to mock X, it's just not possible. There are projects which have done it, but my experience is that it's too large: the specification is something like 160 pages. In addition, you have extensions which replace core functionality, extensions which replace extensions, extensions which replace the previous version of the same extension, so that's a lot. Then on top of that we have the ICCCM, with another 60 pages describing how an X window manager and the communication should work. We have the Extended Window Manager Hints on top of that, and then we actually have two libraries to mock: XCB and Xlib. So mocking that is really, really difficult. In addition, we have dependencies which pull in X without us even noticing, especially back in the KDE 4 times: we had Qt, which just linked X; we have KWindowSystem, which linked X; we have OpenGL, which is defined to link X on Linux; and of course any library using any of them. So somehow you always get X, and somehow you will always miss mocking something. So my experience was, whenever I looked at it: mocking is not possible, we have to perform integration testing if we want to get anything tested. The normal integration test setup for X11 looks like this: you run Xvfb, the X virtual framebuffer; on top of that you run OpenBox as a window manager; and then you use the XTest extension to simulate input devices. The problem with that is that Xvfb is rather limited and way too restrictive, and the same goes for XTest. With XTest I can simulate mouse clicks and key presses; I cannot simulate things like smooth scrolling, touch events, etc. So XTest is somewhere on the level of 10, 15 years ago; the more modern things were never added. With Xvfb the biggest problem I noticed was the complete lack of GLX, so I couldn't run OpenGL in a sensible way on top of it, and XRandR was missing. In the latest version of Xvfb we finally have XRandR, so that finally got fixed, but it's still really, really limited and not usable for proper testing, because it can only do one resolution. We cannot simulate the removal or adding of screens. We don't get a physical size, so anything which wants to calculate a DPI just won't work. We don't even have a refresh size (what did I write there? a refresh rate): it reports a refresh rate of 0 hertz, which is also far from reality. So I was surprised when I saw that it now has XRandR at all.
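The classic Xvfb-plus-window-manager setup described above is usually driven by a small script; a rough Python sketch of it could look like this (the display number, resolution and the choice of OpenBox are arbitrary, gui_test.py is a placeholder, and all error handling is omitted):

    import os, subprocess, time

    DISPLAY = ":99"
    xvfb = subprocess.Popen(["Xvfb", DISPLAY, "-screen", "0", "1024x768x24"])
    time.sleep(1)                                   # crude: wait for the X server to come up

    env = dict(os.environ, DISPLAY=DISPLAY)
    wm = subprocess.Popen(["openbox"], env=env)     # some window manager to manage the test windows

    try:
        # Run the actual GUI test against the headless display.
        subprocess.run(["python3", "gui_test.py"], env=env, check=True)
    finally:
        wm.terminate()
        xvfb.terminate()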
When I then looked at it, I was disappointed again. So why is simulating multi-screen so important? The big problem with the XRandR extension is that it's not atomic: you get changes in a non-atomic way. For example, you get a RandR event when you plug in a new screen; then you have a daemon reacting on that, because it wants to set a proper resolution and a proper layout, and you get more RandR events. And whenever you act on a RandR event before you got the last one, you are operating on an intermediate state, and the state you will produce will be wrong, because the new events are already queued. That's a bit of a problem, especially as we don't know when we will get the last RandR event. So in KWin we introduced a way where, when we get the first RandR event, we start a timer with 100 milliseconds or something like that; on every further RandR event we restart the timer, and we don't do anything till the timer fires. At that point we know, okay, we got the last RandR event, and so KWin never operates on an incorrect state. We either have the old state or the new one, but not the ones in between, where we would do stupid things like removing screens so that your windows jump around, or trying to render on an output which doesn't exist. That worked very well for KWin over the last years; we have hardly gotten bug reports that this did not work. And of course the time spent doesn't really matter, because mostly the screens are mode setting and they are black anyway, so you don't see anything, and the mode setting takes much longer than our timer. So now a little excursion to Plasma 5 and the multi-screen problems. You might have had the experience, when you tried Plasma 5, that multi-screen didn't work. How did that happen? What went wrong? After all, in Plasma 4 everything was fine. The main problem is that Qt introduced a new QScreen API, and they decided to bind a QScreen to an XRandR output. In addition, a QWindow, which represents a window on a screen, belongs to a QScreen. And what then happened was a little unfortunate: if a QWindow loses its QScreen, the platform window gets destroyed. So we had situations where the intermediate state meant we have no screens, and now the platform window gets destroyed and recreated. Okay, that's nasty, but still okay; the windows just jump around. We also had situations where the platform window gets destroyed and then we have a null pointer dereference and the application crashes. That's probably what most people saw: applications randomly crashing when you changed the screens. That all happened inside Qt's code. It was never our code; it was all inside Qt's code. Unfortunately for us, I still see crash reports for that; they still come in from users of old Qt versions. Not openSUSE users, but mostly, I think, Ubuntu is affected by that. And what I also learned, unfortunately, is that when all platform windows get destroyed, the app exits, as all windows are closed. The default behavior of Qt is: if all windows close, the app exits, and that could be triggered by just removing all screens. That even caused a lock screen bypass by disabling all screens. That was something openSUSE actually found; an openSUSE user reported it: you turn off all screens and the lock screen is bypassed. A Qt 4 to Qt 5 regression in the lock screen, because in Qt 4 we never had the situation that we don't have any screens; in Qt 5 it could happen. So yeah.
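The timer trick for coalescing the burst of XRandR events is a generic debounce pattern. Here is a language-neutral illustration in Python; KWin of course implements this in C++ with a QTimer, so none of these names come from KWin.

    import threading

    class ScreenChangeCoalescer:
        # Collect a burst of change events and act only once it has been quiet for a while.
        def __init__(self, apply_callback, quiet_ms=100):
            self._apply = apply_callback
            self._quiet = quiet_ms / 1000.0
            self._timer = None

        def on_event(self):
            if self._timer:                 # another event arrived: restart the countdown
                self._timer.cancel()
            self._timer = threading.Timer(self._quiet, self._apply)
            self._timer.start()

    coalescer = ScreenChangeCoalescer(lambda: print("now reconfigure outputs"))
    for _ in range(5):                      # five events in a burst trigger one reconfiguration
        coalescer.on_event()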
Overall the situation was very unfortunately because it could take the complete session down and we basically looked at it and had no chance to do anything about it because the problems were not in our code. KDE people went to Qt and fixed the code but it took time until the new releases came out and the fixers got in. And also due to the crashes we actually didn't see the bugs in our own code because they were covered by the crashes like the lock screen bypass. It only happened after Qt fixed the crashes because before the lock screen would just crash and restart that was a condition we actually handled well. We just were not able to handle the situation that all screens go away and the application exits. So the current state is that with Qt 5.5 especially 5.1 most of the null pointer dereferences are fixed inside Qt. In addition since Qt 5.6 a dummy Q screen is created if there are no X render outputs. So the situation that there is no screen no longer exists. But there are still a few problems we still see that Windows might get destroyed when an X render output is destroyed and we still see that sometimes Windows jump around because of that. And in my humble opinion that's unfixable because it's broken by design. A Q window should not be bound to an X render output. That's not how X works. On X you always have a kind of screen even if you don't have outputs. Windows are bound to position in the virtual screen and not bound to the position of an X render output. Yeah and now back to the topic. Our experience over the years was that unit testing X render code is not possible. We wanted to do that for a long time but actually it's not possible and it kind of relies on manual testing. So you would have to have an auto test which says and now please unblock the screen and that's not really feasible. In addition a problem are the different drivers. So over the last half year I saw changes in driver behavior quite often. Like I did not change anything with my KDE or acute installation restarted the system and the behavior of how screens are handled by the Intel XOG driver changed. Changed in bad ways like didn't come up at all. If you go into DPM as power mode safe it turned the screen off which is interesting then with the crashes. So that's something which you can hardly test. You need to have tests which also consider the behavior of the drivers. We actually have a few tests which try to do something like Quinn has a unit test which starts a Xafer server with two outputs to at least test that we can handle the condition of two screens correctly. And also for considering tests there are still systems out there which don't have X render. So if you have something like VNC over X you might not have X render at all. And you should also test that because people are not pleased if that doesn't work. So a little bit more about how testing X11 code looks like. So the code in KDE which interacts with X11 is really old. It's from the times where Cooley was still active in KDE. So it's KDE 1, KDE 2, perhaps early KDE 3. So if you look at the code and then look at the changes and you see okay nobody touched that for 10 years. It's basically designed before concepts like unit testing or test-driven development were common or were even known. It's monolithic. It doesn't have units. It's not written with anything in mind which would not be X11. Nobody thought about that there might be at some point valent. But the code works. It's there for 10 years. It's not getting touched. It works. 
So it's kind of tested by the age. But it brings us to the point how can we actually test it? If we would want to unit test it we would have to refactor the code and that's in my opinion a very, very bad idea because probably you're breaking more stuff by trying to refactor the code than you would gain by any tests. Integration testing might be an idea if possible. Open QA might be a very good idea for such old code because you can then relatively easily test it. So on top of that how to test an X11 window manager. So this is now a little bit describing the situation of when before we started the valent effort. So we had a very monolithic architecture. It didn't have any units. We had one huge class which did everything. It's best described by the anti-pattern called the got pattern. It completely relied on Qt creating the X connection. Also it relied on Qt creating windows which also meant that it cannot manage its own windows. So Quinn is not able to create a window and manage it. So if Quinn creates a window you will notice that it doesn't have window declarations because it cannot do that. So when we went to valent we realized we need to refactor because we needed to get away from this monolithic architecture which is thought about X. And by refactoring it we created units and those could be tested. So thanks to the refactoring we needed for valent we were finally able to unit test our code in areas. But how does it look like with integration testing? Integration testing with window manager is even more difficult than integration testing a normal X11 application because we can only have one window manager on an X server. So we would have to replace the window manager and then if you would run the test on a developer system it would start to replace the normal window manager, run the test and then be in a weird state because you cannot prepare the states. There windows still around from the previous session and then how to get back to the normal window manager. So it's only possible to test that in the context of XVFB. We have the problems I just mentioned before. We don't have GLX which we would really like to have. We don't have X render especially in the times I'm talking about a few years back that just didn't have XS. So overall we also want to have a clean state. If we want to test a window manager we want to be able to say okay we currently don't have a window, no I create one, it should be positioned there, no I create the next one, it should be positioned there. So the clean state is really, really important and that's really difficult to get if you try to do that in a test setup. So basically we would have to start the XVFB from within the test and that's not possible with Qt's test architecture. So overall I had made the experiences that we cannot use the Qt test framework for integration testing QIN but that would be of course the one we would want to use because that's what our developers are familiar with. So we had an idea evaluated in a Bachelor thesis which was to create a dedicated framework for QIN which would start the required XVFB then would start QIN on that XVFB, inject a test script through QIN's JavaScript API and then start helper implications to interact with the window manager and then to hit the conditions in the script. So the idea would be you create a window in the external process, set it to full screen and your script verifies that a window got created and that it's put to full screen. 
But overall we have never deployed that in production because it's too asynchronous because we are now not only talking between X and QIN but also between X, QIN and the helper application and especially corner cases are not really testable with that. And also when we had that framework evaluated I started to see light at the end of the tunnel how we can get better testing with the help of Veyland. So the next thing I want to talk about is testing KVNOS system. KVNOS system is our implementation of the extended window manager hints. Interestingly it does not depend on Qt's X11 connection. That's really important at that point because it made lots of things easier. It's also not depending on Qt per se. It's having an own implementation. And what's also made it much easier is that it doesn't perform any event processing by itself. So instead you pass it an X event which was property change or something like that and then it processes it. And with the port to Qt 5 we ported this framework from Xlib to XCB. That was a huge change and I thought okay we are going to do that change which is the base of our window manager and we don't have tests for it. No way I'm going to write the test for that. So I went there and wrote test cases. Actually we did the port to XCB twice. Then developer did it, I did it. I wrote the test against my implementation and then we swapped the implementation out and just used my test cases. And what these tests do is what I have just described for how we would have wanted to have Qt tested. It starts its own XVFB to have a clean state and then runs its test. And these are very basic tests like creating a window, setting a property on it and then verify that the property was set as we expected. Unfortunately, it still allows the test methods to leak information into the next test case. That was as I noticed that our implementation did not support multiple X servers so I could only connect to one X server at the time. That's actually also changed because of Valend. And with that I come to this topic. What did we change with Valend? Yeah, first of all, we are really unsatisfied with the testing capabilities of X. Now that we are going to the new thing, we want to fix our testing. So we want to have proper multi-screen support which can be tested. We want to have everything in the state that we can unit test it and we want to have better integration test capabilities than XVFB plus OpenBox. It just didn't work for us. We have no chance to do it properly and we wanted to do it from the right and do the planning with all of that in mind. For our Valend efforts, we created a new framework that's KValend. It's a tier one framework since 5.22. So it's the latest framework to have been added to it. So it was added last month. But it had a previous life in Plasma Workspace. Basically it's a headless Valend server API. So it's an API which allows us to create a Valend server and implement the protocols in a C++ style manner, an acute style manner to be precise, and also to have the contract described in the Valend protocols implemented. That's something like the Valend protocol specifies that if you pass keyboard focus to a surface, the previous one should get a leaf event. The library takes care of that to have it easily for the user of the library to just update keyboard focus. The user of this library is mostly Kvinov, of course. So we went for an approach like we had on X11 that the actual implementation of the things which make out a Valend server is done in a framework. 
So that other implementations could exist on it. So if at some point we say, okay, Kvinov is not doing it properly anymore, we can write a new Valend server from scratch without having to rewrite everything. What's very interesting about KValend is that it also has a client library. And the client library mostly exists only for testing purposes. And what we achieved is a very, very easy way to completely out-of-test KValend. We can easily create a Valend server with exactly the things we want to have and then have a client connect to it, and we can pretty much test every aspect of it. So at the current point in time, KValend is a library which has about 10,000 lines of client-side code, 12,000 lines of server-side code, and about 11,000 lines of out-of-test code. And we have currently a test coverage of about 87 percent and a conditional coverage of about 65 percent. The conditional coverage is so low because we have lots of asserts, especially in the client code, and those we don't test, obviously, because otherwise our test would fail if we would go into an assert in a test. So overall, that's looking really good if you compare that to the X11 world. And my experience lately is that if we have a bug, we are able to create a test case for it. So that's really good. And also what I'm seeing is if we have a bug, it's in the not yet tested code or not yet test covered code. So the first user of the KValend library was actually Kscreen and not Quinn, because Kscreen started to port to Vailend and wanted to test properly, and it wanted to have a Vailend server which it can connect to, a way how to fake outputs, like adding an output, removing an output. And what's really great about the tests in Kscreen for Vailend is that it can load profiles as test data. So you can describe, or actually you can export, the profiles you currently have, like I have two screens with this resolution and this layout, you export it to test data, and then you can tell us, hey, this one didn't work. It did not do what it should do. And we can put that into our test data and verify that it works. And that also means that we can test the X11 code indirectly by it, because the shared functionality is now being tested through the Vailend code. And what we think we did was doing XVender right. We tried to apply all the lessons we learned from XVender. We applied changes atomically. We tried to get feedback to Kscreen whenever something changes. So if Kscreen requests the outputs to be changed, it sends a request to Quinn. Quinn applies it and then sends back to Kscreen whether it worked or not. If it didn't work, it will revert to what was previously. So it's no longer the, we throw things at XVender and see what sticks. So that's a huge improvement for Kscreen. So in Quinn, we also had a few changes due to going to Vailend. We had a few of the testing blockers removed. We don't depend on Qt's X11 connection anymore. It's now we're creating our own X11 connection and only on X11 we use the X11 connection provided by Qt if you want stand alone on X11, so the normal old Quinn. We need to be able to connect to multiple X servers. That's very important in the test, so in the developer workflow, because we want to have a nested Quinn. And this nested Quinn has to render to the normal X server. That's one X connection and it has to have an XVailend. That's a different X connection. So we actually had to remove the problem we had in the KVender system library that NETWM only supported one X server. 
We had to remove that so that we can have Quinn probably running with multiple X servers. And of course, lots of the X11 specific code got refactored to be WinNuring system independent and those areas can now be tested. We have created unit tests for them on X11 and also for Vailend. In addition, we created some abstractions inside Quinn to have the rendering and the input handling platform specific. So we currently have a few specific plugins for that. We have an X11 plugin which can be used for nested rendering or as the stand alone. Also the normal Quinn on X11 nowadays loads the plugin for the platform API abstraction. We have a Vailend rendering backend which is also again just nested. We have for direct rendering management. We also have a frame buffer device but that might get removed again. And we are able to run on Android's hardware composer with the help of the Piperus. In, yeah, that I just already mentioned. So yeah, we were able to change the K-Windows system and also adjusted all the outer tests so that every test method now gets its own X server that we don't have information leaked into the next test. And what was really important change was done somewhere October November last year is the introduction of a virtual platform plugin. This virtual platform plugin does not perform any rendering per se. It just renders into a Q-Painter object or into actually a Q-Image with the help of Q-Painter or into a virtual frame buffer on a virtual rendering device. And with that we are able to run Quinn on a server which doesn't have a real screen. And if we are able to run it on a server, we are able to run it in our CI system. So that's what we actually did. We built up a test framework for Quinn which we can use now in our outer tests. And this test framework allows us to start the complete Quinn including X-Valent, including the compositor, the effects, everything we have and just don't render to real hardware, render into virtual devices. And this gives us a possibility to have a complete introspection into our window manager. And from there we can create X-Windows and Valent Windows, try to manage them, verify what we did, like did the strut on a panel get applied correctly. We can add multi-screen like we want. We can add screens, we can remove screens, and we can verify that inside Quinn that all works correctly. So that's a huge game changer for the development inside Quinn because that's something I have dreamed of for years to be able to actually test whether Quinn manages a window correctly. And that also showed already that if he got a crash reported, I was able to create test cases for that and fix it and know it will never happen again because now we have a test case. So overall that means we are now able to do test-driven development inside Quinn. I must say that Quinn had already a very good quality before. It's not that we didn't have any quality at all so don't get that wrong just because we were not able to do test-driven development. But overall over the last few months I've added something like 7,000 lines of test code. The new input handling code which we wrote for Valent is completely under test coverage. This includes that we have all the lock screen situations completely tested. So we actually know from our test cases that it's not possible that input events go to a window if the screen is locked. 
What's also nice about this new framework is that we can from there start Qt applications which then use Qt Valent and with that we can actually also auto-test Qt code. So if we see a big problem in Qt we can create test cases for it inside our test framework and include that into our tests. We currently, this has a few limitations because we cannot test our X11 only code. So everything which is in our X11 standalone plug cannot be tested by that which means we cannot test our X1 compositor which means we cannot test the GLX compositor and creating X11 windows in the test case is a little bit cumbersome because we cannot use Qt for it. Qt puts the platform plug-in which is not XCB in the case of QNON Valent. So that's a little bit unfortunate but we have the net-wm classes which help us a little bit there. So in addition brand new in the latest framework release we have now a Valent virtual framework for test server. We cannot start the Kvalent based testing which I presented previously in all cases. So if you have a test which would depend on QGUI application or Q application being created we have a dependency loop because it will try to connect to the Valent server which you are about to start. So that cannot work and then we just run into a situation where the application freezes. I have gone through that with my QN Valent porting because I were there in this situation so I know that cannot work. And now with Kvalent 523 we have a very, very small binary which can be integrated with the CMake and Ctest and all it does is it starts the server through a CMake command and then the server wants it has completely started itself, will start a test binary and then report back the result code of the test. So what it supports is currently creating Windows faking input events. So it's on the level of XVFB but we plan to extend that. So with that I come to what we want to do in the future. First of all I want to mention a problem which we have on our buildkde.org. Buildkde.org runs Stuckers containers and there we don't have a DRID-wise and Quinn tries to do EGL initialize without a DRID-wise and that fails horribly and so we are forced to use the QPainter compositor for our tests on buildkde.org but we have actually a few tests which would need OpenGL so those have the Q skipped currently in the code to ensure that they don't crash on the CI system and don't shadow by showing warnings that tests fail and what's a little bit of a bigger problem is that we don't get any Qt Quick Windows to show because Qt Quick terminates the application if it cannot create an OpenGL context. So we cannot start any Qt Quick applications from this in Quinn's test framework which means we cannot start Plasma to verify it. So that's a big problem if anybody has ideas how we can get EGL initialize and Mesa work in a Docker container please tell us because we are pretty much clueless. So what we want to do is extend our K-Vailand test server. There's only one test case so far which uses it but we want to be able to verify everything we use in K-Vindos system also we want to use it for our new task manager library which we introduced in Plasma 5.7 and want to integrate this into further frameworks to have all the tests run twice so we normally have tests which run on X11 and we can very easily but this framework also run them on Vailand and by that just get a higher test coverage for the code and also for K-Vailand. 
And also what I want to use it is test the hell out of the Qt Vailand client library because it currently still has a few problems and these are annoying me. Also we would like to build up a test framework for Plasma so the idea is to take what we have for Quinn and make it usable for Plasma as well so that we can start Plasma so the complete desktop session in the context of Quinn's test framework and then have an IPC mechanism to verify the internals of Quinn like Plasma creates a panel and we want to see whether the panel actually got created and has on the no manager side the positions we expect. So that's kind of going into the area what OpenQA is also doing so maybe OpenQA will be the better choice that is something we still have to evaluate especially if we consider that we could take screenshot with it where we are getting an overlap so that's probably something to look into what is the better choice here. And with that I am at the end of my presentation. I still have a few minutes for questions so if you have any questions please ask. Doesn't work apparently. Currently in OpenSusidest the current image with Rayland support what would we need to do to be able to test it with OpenQA which basically means getting it to work in QEMU. Good question. I don't know what would be needed to get it running so with the virtual backend you probably would already get it running without any changes with QEMU you probably depends on whether you have GPU which is capable of doing DRM in it. Not really OpenQA, OpenSusidest currently uses the Zirrus backend so there is basically only a frame buffer. You could use the frame buffer backend of Quinn but then you don't have OpenGL and then Qt Quick won't be like that. Well wasn't there the plan to support a software rendering backend in Qt 5.7? Sorry I didn't get it, it was too noisy. I think there was the idea to have a software rendering backend for Qt Quick 2. Right, in Qt 5.7 there is never Qt Quick software renderer but that doesn't work for anything which actually expects OpenGL. So code needs to be adjusted. Yeah that will happen now that Qt 5.7 is out I expect that KDE code will work on that. But yeah if you want to actually test it in OpenQA you would want to test the real thing not just the software fallback which is exactly the problem I have here with the EGL initialized failing. I don't want to test the software fallback, I want to test the OpenGL stack. No you mentioned that you would want to drop the frame buffer backend in Quinn. Would you still do that if it's the only way to get it to work in QE mu and possibly virtual bugs without hardware acceleration? Then for that it's definitely a use case if that's the only way then we will keep the backend. I mean it's not a lot of code, it's something like 1000 lines of code, it's not really expensive to keep it. It's just for me there was the question does it make sense if we have a DRM device which can also be used for software rendering even if we don't have OpenGL does it make sense to have the framework for backend. If there are use cases for it I'm all for keeping it. Okay. We have a question back there. So just a comment about OpenQA and OpenGL and software rendering. Normally we are using LLVM pipe acceleration with Mesa and OpenQA is running usually using the serious software rendering so it means that Qt and Kwin could perfectly run using the OpenGL backend and then we would run using the software emulation of Mesa but at least you would test the OpenGL code of Qt. 
Only if we get to that point. That's what we also tried: to use llvmpipe — and we didn't get to the point where Mesa actually tried to use llvmpipe. It fails before. It fails in the eglInitialize call, and at that point we are not yet where it would decide to use llvmpipe. And the same with Qt: the code actually should work then. But we never got to the point where llvmpipe would even show up; it's failing before. Yeah. I guess you need to — I mean, we might need to work on that, because we are able to use that for the GNOME testing and there is no reason why it wouldn't work for Qt. Looks like there are no more questions. So thank you for attending.
A talk from Martin Graesslin (one of the top Plasma developers) about how to improve the quality of Plasma with Wayland
10.5446/54575 (DOI)
Yeah, I'm sure some will still be coming from lunch, but I have a few pretty generic slides to start with. So let's start with a second talk that goes in kind of the same direction as Owen's, but from another perspective. First of all, who am I? Father of four, one of them is in the room. I can't really code. I've done a lot of coding in a previous life, but it was all by accident: I've got a business education and I couldn't afford a programmer, so I did the stuff myself. If I code, then it's in Python, because that's the only thing I can do. I've been in open source for about 20 years. If anyone can remember Zope — the one with the Z, not the one with the SOAP — I was part of that community and did a lot of great things, some of which are still running. We have a content management system at the city of Kastru that is 15 years old. A couple of weeks ago I filed a bug in their bug tracker because they had a broken page, and they gave me the German way of saying it's all great: they said the admins are not entirely unhappy with the solution. So that's, yeah. I've been with SUSE since 5.3 as a user and since around 9.1 — I had to look up those numbers — as an employee. So I joined in January 2004, and that's when we were in the last phases of releasing 9.1. That was SUSE 9.1, not SLES; SLES 9 came in the same year. Now, how did all my experience with Salt start? It was one day in the office: some of the guys that are back in the room basically told me, yeah, for SUSE Manager, for our systems management project, we are going with Salt. And I was like, what the — why not Chef, why not Puppet, why not CFEngine, Ansible? I mean, we had been talking about all those tools, other teams were using them, and they came up with yet another thing. Well, first of all, I was pretty skeptical. And then I realized: great, it's all in Python. Looked at it. And as I said, I'm not really a coder, but to me it felt easy. So over my summer vacation I basically took the 2,500 pages or so of documentation with me on the iPad — without a computer, because I didn't bring a computer to Italy to the beach — and just started reading about stuff, about all those components and so on. And when I came home, I actually started hacking. I didn't start using it, because I didn't have 10 or 20 or 100 machines to manage, but I knew a bit of Python and I wanted to just hack. And this is what this talk is about. And yeah, I took this one as a movie reference — I'm running out of Star Wars movie references, so I went with Casablanca: Salt, I think this is the beginning of a beautiful friendship. And that's really how my first couple of weeks felt. There are some parts that are hard to grasp, and actually the hardest part for me was getting the YAML syntax right, because YAML looks easy, but it has a few pitfalls. It's even worse than in Python: you really have to make sure your indentation levels are right. But going from there, it was really fun. Now, that's what we are talking about: not just using it, hacking it. Salt in a nutshell — I think you've heard most of that in previous talks, so I'll keep it really short. It's all about masters, minions, Salt grains, Salt pillars. I shamelessly grabbed those diagrams from the documentation on the Salt page; I should have borrowed those slides from you, Tom. The Salt master is, of course, where everything starts; that's the management server. The minions are the devices that are managed. They have a daemon running.
And it's two things. Minions are not sent to the minion, but the minion listens to the bus and grabs them from there. And one of the things that I'm not sure everybody fully understands is that a lot of the filtering even works on the client. So there's a lot of stuff that's just broadcast to everyone, and the minion can decide, okay, they mean me. This can have a few security-related issues, so you have to be careful what you're actually broadcasting. But it also makes the thing very scalable because you don't have to have an engine server site that determines for 10,000 machines, okay, I will address this one and I'll open an SSH board to that machine and do something with it. You just broadcast it and say, okay, anyone who's called X who has a kernel 2.6 something or 3.0 something, listen, this is for you guys. And then the result, same thing, they're basically broadcast on the event bus. Execution models, you heard about the discussion about, you know, idempotency or not. I think it's two different things. And yeah, some of the modules, you can write them in a way easily that makes them have no side effects or no bad side effects. And a lot of them are about querying. Like I just want to know the disk usage. I want to list the number of users of anything and those are safe to use anyway. For others like creating a user, it's really that fine line. Do you write code that always checks or do you write code in the execution model that just tries to behave like a function, do this now and fail if it's been done before and then have the state system take care of the other logic checking whether it's actually making sense to do those. I think both have their own specific roles. Yeah, state modules and formulas, that's something that I'm still starting to understand better. A state file is really just an individual file that describes part of your state. And then when you package it all up, it's called formulas. That's one of the things we are still working on. How exactly do we pre-package certain things like create formulas for OpenStack or create formulas for setting up a whole SAP cluster or so and how do we distribute them, how do we parameterize them? And what should be in the code and what should be basically data that you push on top of that? Yeah, I think we've heard most of that here already. Execution modules with the state modules is really a different kind of syntax in terms of the grammar. So usually an execution module would have verbs like add, delete, kill, start and the state modules would describe a state like it's present, it's absent, it's installed, it's uninstalled or stopped or dead or whatever. Now another concept that's important and solved is the grains. Grains are, yeah, the idea is grain of salt are just data about systems that are usually generated on the systems. They've been misused a bit for using them as roles. So you would have a grain that has web server. I don't really like that concept but it's been misused even by some of the professional sources of salt because I guess in the early days there was no other way. But I see grains mostly as data coming from the system. And I have a few examples here like the BIOS version, the CPU architecture, the host name, kernel release, kernel release, yeah, number of CPUs, full name of the OS. So stuff that you may need to make decisions. Okay, is this system really a SUSE system? So should I use Zuber? That's what grains help you with. And then you can use that in your code. 
Pillar data is data that goes the other way, top down, where you want to keep things secure. A very easy example: if you set up a database, you would not want to put the user name and the password for the database user into configuration files that are distributed to all the machines. So you would set up a pillar that keeps those secured and only exposes them at runtime to the system you want to expose them to. Then, finally, there's the concept of a top file, which is basically the master configuration. Think of it like the index.html on a web server: when you go into that directory, you want to know, okay, where should I go from here? That's what the top file is — it is always looked at as the default entry point, and it describes how things are connected. And you've seen before — I think that wasn't mentioned really often — where you see those double curly brackets, that's where a very nice concept in Salt kicks in. The rendering of those files is not done in a single step: you can have several renderers one after the other, and that's how Salt really nicely separates problems. If you have the problem of writing configuration in a simple format, they use YAML. Now, if you need loops — let's say you want to auto-create a list of servers based on some data from your load balancer or whatever — you could do it like others do it: extend your DSL and introduce loops or if-then-else or case or whatever. No: there's an existing framework for that, Jinja2, which is very established in the Django world for templating that kind of thing, injecting things like loops or just variable replacement into templates. They've just reused that, and you run those renderers one after the other. If you don't change anything, it just happens automatically. So first the YAML is parsed — no, actually the other way around: Jinja does its job and expands all the stuff, and then you've got basically an expanded YAML file that's then parsed. Yeah, but we wanted to actually talk about hacking it, not using it. First of all, I guess most of you already know: if you want to get Salt on openSUSE, it's really just a 'zypper in' away. We have a stable version on openSUSE, both Tumbleweed and Leap. We will also soon have it on SLES in the Advanced Systems Management module. And if you want a more bleeding-edge version — I've got the URL on the slides, and those will be uploaded to SlideShare or so, I think — we have this project called Systems Management SaltStack where we have pretty recent versions. Currently we have 2016.3 there already. One thing that Owen mentioned already: it's really easy if you want to start experimenting with things. You don't have to create your own packages, clone the whole Salt project tree or whatever. What you can do is really use those underscore directories and put things in there that are kind of local overlays. You play with them. You can also use that, of course, for stuff that you will never open source for some reason. But if you want to open source it, if you want to develop upstream, it's always a good idea to start by playing around there. And actually, Duncan has created a project with Salt and Snapper that I'll refer to later on, and that has a very nice example of this kind of work style, even with a Vagrant setup to use right out of the box. And then you can sync those things.
So the sync_grains command only syncs the grains, but you also saw on the other slide from Owen that there is a more generic one that syncs everything. And those directories exist for all the components in Salt: from beacons to engines to grains, modules, proxies, renderers, whatever — it all works the same way. Now, my very first experiment was writing my own grain. The idea here is: okay, I want a piece of information from the system, and the standard grains just don't do it for me, so I need to grab something. This is a really simple example. All those examples fit on a page; they don't have any error handling, logging, anything — so don't just use them, use them as an inspiration. In this case, for example, we are just using Python to run the which command and check whether zypper is installed. If it is, it will return true plus the path; if not, it will return an empty path and false. And it's as easy as that. So you can just call something on the command line, probe for a file — you could use anything there. If you have an existing shell script or tool that knows how to figure things out, you can use it, or you can of course call any Python libraries that you have, or just read the proc file system or so, and return the result. And the nice thing is that basically all you have to make sure is that you return a Python dictionary; you build this dictionary with the data that you want to expose. That was my very first experiment. This particular one actually fails on the SUSE "just enough operating system" images, because we are not installing which there; we are relying on the built-in of bash. And I just haven't found the time yet to figure out what exactly I would have to change, because on the command line it would just run. (Audience: change your command and use bash's built-in type instead.) Yeah, that's basically it, I guess. I've just not bothered too much about it, because it works if you have which installed as a command, and it doesn't if you haven't. Now, execution modules, that's the next thing. And again, same concept: you can use those _modules directories. Salt and Snapper was what I tried first. My code is nowhere close to what Duncan and Pablo did during the workshop we had a couple of weeks ago in that same building here for the SUSE cloud and management team, but I got it running. And the reason for that was mainly because snapper is such a great tool. First of all, snapper has done all the abstraction of how to handle snapshotting systems from a command line really well. It's well documented. It has D-Bus bindings built in — and D-Bus bindings from Python are just an "import dbus" away, so that's really easy to use. And it even comes with Python examples. I've given you the project URL on GitHub, so you just go and copy and paste the examples and you get it running. So this is basically all you need to do to write an execution module. Of course, this one only does one thing: it lists snapshots. And in this particular simple example I actually just return the unparsed output from the list-snapshots D-Bus call directly, because it's already returning a dictionary-like structure that is automatically mangled. And of course, it's not nice: you may want to filter it, and you want to make sure that instead of just positional information, you give it nice names or so.
And if you just want to get it running, pass the data, pass it to the server and then do something with it, it's as easy as that. And that's probably true for any other commands that you could run over debas. If there's a namespace, that's the boilerplate code you need. And of course, you need to put in some logging. You need to put some error handling in there and so on. And then it's probably a bit longer. But that's really great stuff. Yeah, the pros do it slightly more advanced. And there's a blog post from Duncan about it. And there's also a GitHub project. This is not integrated into upstream sold yet because we're going to use this conference to kind of work on the design together with Thomas because we are not quite sure yet whether we are getting everything right conceptually. But the cool idea behind that is we are not only using an execution module that exposes the snapper API to a remote engine. We are using it for states. So you can have a state that says make this machine look like this snapshot because it would just always make sure before you do anything else, it applies this snapshot. You cannot by snapshotting the system or rolling back a snapshot, but by taking a snapshot and copying over all the files from that snapshot to the system that have changed. And you can also exclude files or directories that are not relevant for your configuration management like your data, basically, or your logs. Yeah, another thing that I worked on and if we have time left, I've set up the demo. It is the so-called proxy minions. I mean, to be really honest, they are a bit oversold because most of the problems that you can solve are proxy minions you could solve before, but they are a nice way of giving yet another abstraction. The idea behind a proxy minion is if you have a device where you can't run a minion because it may not run Python or you can't control it, you only have a login into, let's say, a REST API or any kind of, maybe you have a command line tool that you can use to communicate with that tool. You basically write a proxy minion that talks that API and exposes itself to your Sorg Master as a proxy for those systems. There are existing implementations for that, for HPE1View, for example, or for some blade center management controllers, some switches. What we did is make it happen for those Philips light bulbs. Most of the heavy lifting for that was done by Bo, who is also in the room today. So if you have time, I can give you a little demo on that one later. I've got yet another nice example for something really different. And I think that shows the real power of how Salt is so flexible, but at the same time has same defaults. So you can run it out of the box with very little configuration, just following the documentation. And it's really just bringing up a minion and a server. It's probably five steps altogether once you've installed the software. But everything is just modular. You can change the way every single component works by overriding it, by replacing it. Now, what you've seen in Tom's presentation here on stage is that basically anything that you do when the states are created and rendered, it's all ending up in a big Python data structure. So you have this high state that's then compiled into the low state, and that's basically the input for the state engine. That also means that you can do that in a different way. You don't have to go from YAML, you know, expanding the YAML using Ginger 2. 
You can, there's an existing PyObjects renderer, for example, where you can use simplified Python or you can use plain Python. Now what I tried is we have this project called machinery. That was written by a completely different team at SUSE. All the backend implementation is in Ruby. But the output of machinery is a JSON file. It's basically two things. The JSON file that describes in detail what's going on in the machine. So what users, packages installed, services running, all that stuff. And if you run it in full mode, it would also create tar balls with all the stuff that is not described well by just text. So if you have files that are not part of an RPM, yeah, you can tell it, okay, package all the crap up, and so I have overlay tar balls. I'm now talking about this JSON file. And I thought it should be easy to take that JSON file and use it as input and basically write a renderer directly. And I succeeded to some extent. So that's really experimental. It was like an hour of work. So don't get me started about coding quality or anything. And it's again, no logging, no nothing. But it was as easy as, again, with the power of Python, with its batteries included, of course, like there is a debus module, there's a JSON module that you can just use. You don't have to look for it or so. It's just there, import JSON. Now what you can do is you basically load your data and you will be able to go through that JSON data tree. And in that case, I'm just filtering for users and I create an output tree that has the dictionary with the user present directives. And of course, this is again oversimplifying because there's more data. Like there's of course, you know, user UID, group ID and all this stuff. I just completely omitted that. And same for packages. I mean, this code will actually work. It will recreate all the users that are in the JSON file and it will reinstall all the packages. What it doesn't do, and that's really where I need more interaction with the SaltStack team. In those cases where we have to figure out dependencies, because our JSON file from a machinery doesn't really take care of dependencies. Like should I install that user first and then I can install the packages? Or can I do it the other way around? Or do I have something else that would take care of it? Like when you build images with our QV, image building chain, QV will take care of those dependencies. So I just pass it an XML file. But if I'm in the engine here, I mean, unless I specify those dependencies manually, which I could do because at that point I can write my own high state however I want. Yeah, but I have to put the logic somewhere. Yeah. There's more stuff that other SUSE people are working on. In the keynote I already mentioned, we are maintaining Java API bindings for Salt. The API is helping us with really keeping concerns separate, with not interfering with the Salt engine too much from SUSE manager, keeping it all separate. Because the event mechanism is very strong in providing us a lot of data from the machines. We're basically just using the Java API to listen to a lot of the stuff that's going on the event bus and creating database entries from there. Like if we want to inspect the machine, collect all the software and hardware inventory, it's basically working that way. The other project, I kind of hinted it already, now Kiwi used to be a project written in Perl. The newest versions of Kiwi are Python 3, which is a bit of a problem. 
So Bo looked at it — in that same workshop where we succeeded in doing the Salt-Snapper integration — and he ported it back to Python 2 as well, so it will just work out of the box with the same Python that current Salt versions are using. Now you can play with integrating Salt with Kiwi. That gives you a full potential tool chain where you can go from inspecting systems to not only configuring systems, but also creating images for the parts that you just want to dump into your binary and use as a baseline for running your stuff. So that was a really quick run through what you can do by extending Salt. My main motivation really is to tell anybody who has some Python knowledge, or is just the basic tutorial away from acquiring that Python knowledge: it's very easy to start working with Salt and extending Salt in almost every aspect. If you want to write your own language to describe states, you can. You can use JSON, you can use whatever XML format you want or come up with. I'm not suggesting that you should do that, but if you have some existing tool and you want to try to make sense of an existing description from another tool, you could do things like that. You can write your own modules, state modules, execution modules very easily. What I've kind of skipped is the output side, so all the data that comes back from the Salt minions. There are lots of existing projects and ideas that we have around using tools like Logstash and Elasticsearch, where you basically just take the data that's coming back from the minions and put it into some NoSQL database or SQL database or log management facility, do some filtering there and so on. So the possibilities are really endless. Yeah, I have got a question slide here. If you have any questions, that's the point for it. If not, I could show you some of the lamp stuff, just because it's fun. Question, yeah. Can you show the lamps? Yeah, I will show the lamps, and I hope it's all going to work. So this is something that we originally did for a demo at — here we go — at SUSECon in Amsterdam, and then later I did a similar version, with help from Don Westberg from the States and Johannes Renner from the Nuremberg SUSE Manager team, at SaltConf in Salt Lake City. For that one we didn't only have three lamps, we had four lamp posts with three lamps each and a few backups, so that was a much bigger show. But yeah, I can give you a little demo here. I'll just move my shell to the other screen. Okay, here we go. Yeah, clear. So, just to explain what's going on in the background: I have a machine running that is basically a virtual machine that talks to the Salt master as if it was a Salt minion, but in reality it impersonates all those lamps through that little thingy here that has a REST API that I can call, and it's connected through the network. To start with, I wrote my own API code that really just was for the demo, so that's not part of the code that Bo wrote; I just hacked it in, and I didn't do the underscore thing, so it was directly in the code, because it had to be quick. This one — and I'll show you the code in a minute — basically randomizes a color, assigns a color to each of those lamps randomly, and then sends what they call the alert command. So when I run it again, it should start up with a different color per lamp. Yeah, it does. So it rolls through all the colors, hundreds of thousands of colors, and most of them suck, to be honest. So that's the fun thing.
We were also using that for actually displaying state, so what's going on on systems. And I can show you some of that code if you are interested. So first of all, let me check where I have the... Yeah, so I hope that's kind of readable. The have fun part is really just this little method here. Actually, the heavy lifting starts here. The way that Hue Lamp API works is basically you pass a JSON structure and you can do things like Lamp on and then Hue is the color saturation, status consideration, and then I use the alert mode. And you basically iterate over all the lamps that you have, and there's an API call again to query the system. Actually if you just want to play with it, there's also something that isn't in any way related to Sol, but that's cool. I'll show you in a minute. There is a crawl plugin for it as well. So I can also... That's what we do if you do a demo and something doesn't work out, we can just fake things, because I can just go in here. And that's basically a piece of JavaScript plugin that uses the same API. You can change the color, you know, and so on. The free version doesn't let you group lamps or so, but just show them. For simple demos, that's really cool. Okay. Good. So what we did there is when you look at... So that's the manager server, right? I have to type... They go to my configuration, servers, reactor. Here we go. Yeah. So that's just a few examples. The reactor mechanism can be configured by just putting those configuration files that are also written in YAML into the system, and just bringing up one media, one start, for example. So that one... Yeah. This one, basically, it's triggered by an event, and it will then use the hue color call, set a color of blue, and in this case, it's going after lamp number three. Yeah. Yeah. That was the part that was a bit hard with the demo, because if you have more lamps, you'll have to get all the numbers right, and you'll have to have files for all that. And it's a bit of a pain if you have to re-number them and reassign them to the thing. There's no easy way. You will basically... If you mess it up after lamp 14, you'll have to start again, because otherwise, they are not in order. Yeah. And then, basically, from the Susan Manager code, we would send events, and I can give you an example. I think my Susan Manager should be up and running. But some luck, it's actually going to work, even. So in Susan Manager, we have those salt states in the state catalog, and there are a few like the alert state. So this one is directly running the hue alert command on lamps. I think I've assigned it to one of the systems, or if not, we can just do that. That's not how it's supposed to be used, but it'll hopefully work. So now you can assign that state. I'm not sure if it's going to work. But one of the other states worked, I remember. I go to my... This one wasn't heavily tested, especially not with those lamps, but I think if I just reapply the year state. So that's what I've been talking about in the keynote. We are not actually, in most cases, linking salt states directly to systems, but we try to always go through those system groups, because that way you're completely separating concerns, and then Edmund could just make the connection without having to know about the salt parts. And the guy who's writing the salt states doesn't have to know exactly which systems those states are supposed to be running on. Let me see if that one does anything. I had some effects running yesterday. Yeah, anyway. All lamps. 
I'll try the other one. That's one more. Okay, that one doesn't work. Okay. Another thing that may work is if I just bring up the... Oops. Let's just bring up... Yeah, okay. That's an easy one: of course, the Media1 server doesn't run. So of course, if the VM isn't running, it's not going to do anything. Let's just wait for it, and maybe when they start up it looks better. But yeah, long story: you can do things like, when the machine comes up, you fire an event. I now have a Raspberry Pi with a Sense HAT that has an 8x8 — so 64 — RGB LED matrix. So with a Raspberry I could actually do things similar to this with 64 virtual servers and have it all. Yeah, here we go. So that's the load balancer coming up, first lamp. And in theory, if everything works according to plan, the other lamps would also come up. Yeah, you see. So yeah, you see, it's working. It was just operator error — of course, if the VMs are down, Salt will not talk to them. Okay. So much for now. I hope that inspired you. As I said, it's not limited to Hue lamps. You can do things with a Raspberry Pi, for example. On the Raspberry there is Python, so you can run it natively. I think Duncan has a cluster of Raspberries now running Salt, right? Yeah, with Salt as its agent. Another thing that I really like — and that's really what the Ansible guys talk to you about: well, but Salt, they need that agent, and it's pretty big and so on. First of all, we are playing with ideas to bring down the Python footprint, because most of the stuff that adds to the Python footprint is just overhead: you have a source file and a compiled file, and you have a lot of libraries that you never use, and all the documentation. You can strip most of that. And you can — well, that's when the states come up. And the cool thing is, Salt actually waited until those machines are up. And for some reason, that should work. So when it gets green, that's when the state is applied. Yeah. So, with salt-ssh there's a simple mode where you can even execute just plain shell commands if there's nothing on the system but an SSH daemon and, of course, a shell. You need some bash; I'm not sure about busybox, I've never tried that. What you can do is even go in and, with just plain commands like RPM commands or a curl or wget or whatever, bootstrap that system to bring up the necessary parts, like just install Python. Once Python is installed, you can use salt-ssh and you can use all these execution modules or state modules, because Salt will then temporarily move all the modules it needs onto the machine, use the local Python interpreter to run them, and then either clean up or cache them for later use. So you can go from zero footprint, to just a little footprint because you need the Python interpreter, to the minion running as a daemon all the time. And that's really a great combination, I think. So you can bootstrap Salt with Salt — not many tools can say that. Okay, thank you. Any more questions? You can ask now, or you can just grab me or send me an email. My email is really easy: I'm joe at suse.com. My Twitter handle is joesusecom. My Google email address is joesusecom at gmail.com, and I think I even have that for iCloud — no, I'm not sure about that one. Yeah, okay, thank you and have fun with Salt.
After the SUSE Manager team had chosen Salt as the future engine behind SUSE Manager, Joachim "Joe" Werner, the product manager for SUSE Manager, spent some time to learn the project hands-on. This is a very personal report about that experience, from extending Salt with code to manage WIFI-controlled LED lamps for a demo at the SUSECon conference to first experiments with writing a Salt module for Snapper. This talk is for you if you don't just want to know about using Salt for configuration management, but are more interested in contributing to it or hacking it for your own needs. Basic Python skills recommended to make sense of the example code. ;-)
10.5446/54577 (DOI)
Okay, so welcome everyone to my presentation about Cheetah. I expect you came here because the cheetah is such a nice animal. It's really fast, and that's usually what you expect from running other binaries from your language, because you don't want much overhead — and fast also in other senses: you expect that it will be fast to write, and also fast to read, so you can quickly recognize what the code does. So easy to read, easy to write. And don't forget, the cheetah is quite a dangerous animal, and running binaries can also be quite dangerous, especially if you are passing some parameters and so on. So you should be careful, and we also had this in mind when we started developing this Ruby gem. By the way, the authors of this gem are David Majda and myself, and it's now, I think, three or four years old, but we recently added some new features which I hope you will find interesting. So what will we talk about? At first I will show existing solutions in Ruby, because we are open source developers, so we like to cooperate: if there is an existing solution that fits, or just needs a few adaptations, it's better to use the existing one and cooperate with someone else who helps us maintain it, use it and document it. I will compare the existing solutions and also explain why we didn't choose them — usually because their design doesn't fit our needs, and it's hard to adapt the design of other tools. Then I will show you some features of Cheetah, and last but not least I will show the latest changes done in the last half a year. They are mainly the result of adapting Cheetah to be also used in YaST — there are currently some places in YaST that already use Cheetah for running binaries. So what are the existing solutions? I will start with the most famous one in Ruby: using backticks. These are easy to use; as you can see, you just place the command in backticks and it's shell-expanded, so you can do any fancy stuff here like redirecting outputs, piping and such. What's not so nice? It's not secure. The majority of security problems in Rails around running scripts come from these backticks, because — if you look at the second example — you just pass an argument, and anything can be in there, like a pipe or an appended command that then removes everything. So if you do this on your web pages, there is a big risk. You need to handle it manually; there are helpers for it, like shell escaping, that take a string and escape it. But even a single place where you forget to escape is a security problem. So for us this is not secure by default. The second thing we don't like much is checking errors. Errors are checked via the global variable $?: you need to check what the exit status of the command was and then react to it. And again, if you forget to check it, it can happen that you just ignore some errors and have bigger problems later. We prefer to fail quickly if something goes wrong. Another possible call is using system. System is more secure, because it doesn't interpret the string as a shell command; it just sends these parameters to the exec call. So if you write system with "git" and its arguments, each parameter is just one parameter. If you pass a parameter from your user, it's always just one parameter — no shell expansion, no more problems. But as you can see, the return value is only whether the command ran correctly.
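To make that contrast concrete, here is a minimal Ruby sketch of the two approaches described above; the user_input value is a hypothetical untrusted string, not something from the talk:

  # Backticks: the string goes through the shell, so untrusted input can inject commands.
  user_input = "master; rm -rf /tmp/something"   # hypothetical untrusted value
  output = `git checkout #{user_input}`          # shell-expanded -> injection risk
  puts "exit status: #{$?.exitstatus}"           # errors only visible via the global $?

  # Kernel#system: arguments are passed to exec separately, so the whole string
  # stays one single argument and cannot inject extra commands.
  ok = system("git", "checkout", user_input)
  warn "git failed" unless ok                    # still easy to forget this check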
But if you need to get the standard output or error output, you have to play with the standard streams attached to the command, which is not so nice, because you need to close the existing ones and open new ones, ideally attached to some string stream to capture the output. So it's not easy to use, and definitely not fast to write correctly. That's also why we don't like it so much. And also, because it doesn't use exceptions, you can simply overlook that the command failed. Then there's something more Linux-specific, which is popen3 — and there's also popen4 as a gem; I think popen3 is part of the Ruby standard library and popen4 is not. But as you can see, the syntax is quite tricky. It opens a block and passes a lot of streams. It also passes a thread object that holds the exit status after the command finishes; it allows waiting and some more interactive use. The problem is that it's quite low level — it's really almost one-to-one with the Linux system call — so it's not so easy to use. One nice feature it has is that it can pass environment variables. That's quite useful if you programmatically run some script or binary: if you have, say, a Chinese user, the Chinese locale is loaded, everything is shown in Chinese, and you want to parse the output of a command — and if you get Chinese, I don't believe your program handles it nicely. So it's nice that you can say: I want to run it with the standard locale, and maybe also some secure settings, like do not use the display, or use this display, and so on. It's quite nice that you can pass this environment and it's changed only for this call. But as I said, its usage is not so easy, and it's a lot of code. And another nice library is called cocaine. You may find this name familiar if you are a drug user. It's created by thoughtbot, which is one of the companies that are quite famous in the Ruby world. It's object oriented, which is quite different from the other approaches — the others are just function calls, cocaine is object oriented. You create some command and you can run it. It allows parameter passing, which means you construct a command and say: okay, these two parameters are passed by the user of this object; then you call run and say: I want these two parameters to be replaced. It uses exceptions, which is nice: if something fails you get an exception, and it forces you to handle error states. If you don't, then you need some global handler that reports that something went wrong, so you quickly catch any problems. It looked quite promising for us, but the problem is that they have a different focus. They really focus on reusable commands — you have a command and run it multiple times with different parameters, and somehow capture objects and so on. And also, it's quite controversial to add a gem with this name to our enterprise distribution. And then there are some, well, jokes. So in the end we decided we would do it our way. And now let me show it, and you can judge whether we succeeded. For simple use cases it's simple to use. As you see, the first line is basically the same as a system call: each parameter is one argument, so no shell injection, and it's simply git with something — even someone who doesn't know Cheetah can recognize what it probably does. And if you need the output from this command, you just pass — in the second example — capture standard output, and it's returned. So the return value is the standard output. It also supports streams.
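A hedged reconstruction of the simple examples being described here, following the Cheetah gem's documented interface (the repository URL is just a placeholder):

  require "cheetah"

  # Each argument is one parameter of the exec call - no shell expansion,
  # so an untrusted argument cannot inject extra commands.
  Cheetah.run("git", "clone", "https://example.com/some/repo.git")

  # Ask for the standard output and it becomes the return value.
  files = Cheetah.run("ls", "-la", stdout: :capture)
  puts files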
So if you have, for example, a very big file and you need to process it in parts, you can read just a few bytes, process them and then continue. As you can see here, we open some file and then let some command do something with it — in this case we write the standard output to this file, so it's like capturing output in the shell. You can also use it for standard input and error output. So if some command processes data in parts, you can pass, for example, some network stream — whatever streams you have in mind can be used for this. It also supports pipes, which, apart from backticks, the other libraries don't support. And again, we try to have a quite familiar syntax that's easy to recognize: probably even if you don't know much about Cheetah, you can recognize that it runs some cat command and then greps the result for some keyword. So I hope it's intuitive enough to recognize what it does. And at the end we capture the result. Yeah, good. So I repeat the question: what happens to standard error? By default we have logging, which I will talk about on the next slide. So the standard error is logged, but if you don't ask to capture it, it's not returned. If you would like to have it, you can just add another parameter, standard error capture, and then it returns a tuple of two elements: you write result, error = Cheetah run, and it will return the error output as well. So if you would like to have it, you can get it. And by default we log the outputs. And as I already mentioned, there's logging: by default it uses the Ruby Logger interface, so you can register your own logger to get messages from Cheetah, and Cheetah then writes: I ran this command, it returned this standard output, this error output, this exit code, and so on. And of course we use exceptions, because we think it's much better to have them. It raises Cheetah::ExecutionFailed if the command's exit status is nonzero. I will mention later that for some commands this is not the perfect behavior, so we also slightly adapted it to fit those needs. But by default, if you don't specify the expected error codes, you will see an exception saying an unexpected exit code happened and something went wrong — and this exception already contains the standard error output, so you see what the program wrote there. So those are the features we had from the beginning in Cheetah, and now to what we modified to make it easier to use in YaST, because YaST has some specific needs. A few things. First, as I mentioned, there are some binaries that have expected nonzero exit statuses. For example, if you grep something, then 1 means the search pattern was not found — that happens quite often and usually it's not an error, it's just: okay, we didn't find it. So you can specify allowed exit status 1, and then the exit status is automatically added to the return values. As you see in the example code, just the code is returned, because we only want the exit status; but if you also want standard output and standard error, then the exit status is at the last position. And as you can see, you can pass more than just one integer: you can pass a range, you can pass an array — almost anything that responds to the include? method. Cheetah just asks: is this exit status included in the allowed ones? If it is, it's fine — it's expected, just return it — and if it's not, then it raises the exception.
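The streaming, piping, logging and error-handling features just described look roughly like this; a sketch based on the gem's documentation, with made-up file names:

  require "cheetah"
  require "logger"

  # Stream standard output into an already opened file, like `ls -la > files.txt`.
  File.open("files.txt", "w") do |out|
    Cheetah.run("ls", "-la", stdout: out)
  end

  # Pipe one command into another and capture the result, like `cat foo.txt | grep keyword`.
  matches = Cheetah.run(["cat", "foo.txt"], ["grep", "keyword"], stdout: :capture)

  # Capture both outputs - the return value is then a pair.
  out, err = Cheetah.run("ls", "-la", stdout: :capture, stderr: :capture)

  # Register a logger to see which command ran, its outputs and its exit status.
  Cheetah.default_options = { logger: Logger.new(STDERR) }

  # A nonzero exit status raises an exception that already carries the error output.
  begin
    Cheetah.run("false")
  rescue Cheetah::ExecutionFailed => e
    warn "command failed: #{e.message}"
    warn "stderr was: #{e.stderr}"
  end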
Another feature is passing environment variables, because, as I said, in YaST we use this very often: YaST is localized and we need to get parsable output. So no Chinese, no Turkish or Spanish — just plain English. And last but not least, we allow running commands in a chroot. That's very important for YaST, because during installation you mount your target system and want to do some stuff there, like regenerating the initrd and such. So you need to run it in that different root, to ensure that it runs the installed mkinitrd and not the one from the installation system, and so on. And in the end, because Cheetah already does forking and similar things, adding chroot support was very, very simple — it's just one call in Ruby. The only drawback is that you have to be root to use this feature, but that's just the default permissions on Linux. So, if you want to know more, there are more examples and more features that I haven't mentioned here. You can go to the Cheetah project, which lives on GitHub under the openSUSE umbrella. It's maintained, and it's currently available in Tumbleweed; I think Leap 42.2 also has it, 42.1 I think doesn't, but I'm not sure. For newer distributions it's available and you can freely use it. Okay, so do you have any questions about this gem? Okay. There's a microphone. Does it work? Yes. Question: this gem is also available in the Ruby distribution package manager, right? Yes, it's available on rubygems.org. You can install it anywhere. It doesn't have any dependencies that are specific to openSUSE; it works on any Linux distribution. Where it doesn't work is Windows, because Windows doesn't have things like pipes and so on — we use some lower-level Linux stuff. And I also haven't tried it on any other Unixes, because it does some calls that are part of the Unix standards, but I worry there's also some stuff that's Linux-specific. So I haven't tried it on something like AIX or the like. Okay, follow-up question: do you have a preferred way of getting Cheetah — the Ruby gem, or zypper on openSUSE? Are they equivalent? They are basically equivalent — it depends, of course. It's usually very up to date, because I am upstream and also the maintainer in openSUSE: when I release a new version, I also add it to the Build Service, so it's there. The advantage of RPM is that it offers you the new version when it appears, but of course you can do that with the gem via gem update, so they are quite equivalent. Also, we have a nice feature in openSUSE: when we package Ruby gems, we keep the gem data. So if you install another gem that depends on Cheetah, and Cheetah is installed via RPM, the gem packages still see Cheetah, because the RPM also registers it in the gem database on your machine. So basically I recommend using RPMs, because it's easier to see and manage everything in one tool, and also gems see gems installed via RPM, but not vice versa: if you install something via gem, RPM doesn't know that you installed it. Okay, more questions? Okay. So I have one question: who have I convinced to try this gem? Hands up? Nice. Six new users of Cheetah. So thanks for your attention, and you can contact us — I am part of the YaST team. The easiest way is to contact the whole YaST team on Freenode or on our mailing list. And at the end I would like to thank Richard Brown for creating these nice slide templates, because my usual slides are just white ones. So thanks to him and thanks to you. Thank you.
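To round off the Cheetah part, a sketch of the three YaST-motivated additions discussed in this talk — allowed exit statuses, environment and chroot. The option names (allowed_exitstatus, env, chroot) are taken from my reading of the gem's README, so treat them as assumptions to verify against the current documentation:

  require "cheetah"

  # grep exits with 1 when nothing matches; treat that as a regular result.
  # With allowed exit statuses, the status is appended to the returned values.
  out, status = Cheetah.run("grep", "keyword", "foo.txt",
                            stdout: :capture, allowed_exitstatus: 0..1)

  # Run with a predictable locale so the output stays parseable.
  Cheetah.run("ls", "-la", env: { "LC_ALL" => "C" })

  # Run inside the mounted target system during installation (requires root).
  Cheetah.run("mkinitrd", chroot: "/mnt")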
Cheetah is a fast and secure native way to execute scripts and programs in Ruby. It includes native support for piping, streaming inputs/outputs, mandatory error handling and running in a chroot. The session will contain live examples of usage and a comparison to native Ruby methods like backticks or the system call.
10.5446/54578 (DOI)
Great. So welcome to the presentation about CFA, which means Config Files API. It's an API mainly used for editing configuration files. By editing, I mean fine, small changes to a file that do not break existing stuff. So it's not something like Ansible or CFEngine or such tools that own the whole file and just deploy a new version when they modify it. It's an API for the small changes you expect, for example, from YaST: it doesn't break your existing stuff, doesn't break your own comments and so on. So what's the content? I will explain why we created a new API and why the old one is not enough. Then I'll explain the design of this API and how it works together. And then I'll show some real-life examples of usage, because it's already used on Tumbleweed — so I will show some code that is used in Tumbleweed. So why? The obvious reason: too many facepalms when you use or see the old code. That's usually how motivation starts. If the old code works somehow and you can somehow use it, then you use it; but if it starts to be very annoying, then you have motivation to do it better. Currently I will mostly talk about YaST, because YaST even now still uses an old API for these changes. It's called SCR — I don't even know what it actually means, it's a very old acronym. And what's the current problem? It's not very reusable. It can be used only in YaST, and even if some projects had some motivation to reuse it, in the end they just gave up, because it's too tightly coupled with the rest of YaST: it requires a lot of libraries from YaST. Also, the last point is that it was designed for communication with YCP, which is now a very legacy and almost dead language. It's also monolithic. So if you want to use just part of it — like its ability to parse files, when you actually don't want to read or write from the current system — then you have a problem. Also, it's completely written and maintained by the YaST team. So every piece is done in-house, which is not always good; we would like to share our work, and also share the maintenance of some parts with other interested teams. Yeah, its API is very strange, and, related to that, its API is confusing. We see it again and again in YaST when someone new comes to the team: SCR is the most scary part of YaST. It's hard to explain to newcomers, it's hard for them to use, and it's quite hard to explain how it works, because it has a lot of abstractions that are not needed. They made sense 15 years ago when the project started, but no longer make sense now — and it's hard to explain something that doesn't make sense to anyone. So what are the requirements for a new API that we hope will replace SCR and will be easier to use? Of course, the first one is easy to use. If you have something that's hard to use, no one will use it, or they will try to find shortcuts to avoid the complex parts. It has to be modular, so that if something useful appears in the open source community in the future, we can replace, for example, a part that doesn't work well with something that already exists or will exist and works better. So we would like to have replaceable parts. And we would also like to have our own parts that can be used in other places — some modules written for CFA we would like to reuse elsewhere. Of course, another requirement is that it's easy to test, because what's not tested is usually broken. It doesn't mean that with 100% test coverage nothing is broken, but it really decreases the number of bugs.
Of course, it should be object oriented: SCR is from YCP times, so it's function-based, but we would like to use objects, because the code is now in Ruby, so we would like an object API. And of course the most important part is being friendly to newcomers, so that if someone would like to start with YaST or contribute to YaST, they don't hit a strange part of its code which is hard to understand. So now let's look at the design of CFA. It basically has three components. The basic one is the reader/writer; it has a single responsibility: translate a string to some target storage, or read that string from the target — just write a string or read a string. Then you have the parser: this string is then somehow interpreted into some tree, or vice versa — you have a modified tree and want to translate it back to a string that can be read or written. And then there is the model, which is something like a high-level API for a given configuration file; a model usually has some actions, some operations, and those operations work on the parsed tree. And that small line between the reader and the model is dependency injection. What the user basically works with is the model, and they can pass a different reader and writer to it. So if you have, for example, a file that's not on your system but somewhere over the network, you can write your own reader/writer for that network file and pass it to the model; in the end the data is read and written through that reader. So now let me look closer at this. As I said, the reader/writer is a very simple interface: it has just two methods, read and write. Both methods get a path as their argument. The path is relative to the root. So if you want to read, for example, your GRUB2 default file, you just say: I want to read the file whose path is /etc/default/grub. It works only with plain strings — no formatting, no stripping of the string, no other operations. It's really just: get me the string. The examples we already have: a common file one in Ruby, which just reads from the given path and writes to the given path. Then we have a memory file, which is very useful for testing: the string is held only in memory, you construct in code how the file should look, then work on it, and then compare whether, after some modification, the memory content looks as it should. And then in YaST we also created our own specific one that's not part of CFA, a target file that's used during installation, which knows when it needs to write to the target system — that is, where we install stuff. And of course, if someone else needs their own reader/writer, for example reading and writing over SSH, they just create their own class that implements these two methods, and that's enough. The second part is the parser. It translates between a string and a parse tree. Each tree is parser-specific. I also considered creating a generic parse tree, but in the end I found it's too much abstraction and it can lose some features of a given parser. So currently it's a parser-specific parse tree, and it's a quite low-level tree: for example, with Augeas it holds all the comments, it doesn't know any relations, it just has positions and can recognize the basic syntax. And as I said, one example is Augeas, which we use mainly for parsing and serialization. Another possible one is a line parser for simple config files that have one option per line — so a very, very simple parser. And of course you can use many more; these are just the ones currently used. And as you can see, the hardest part, which is parsing, we currently delegate to Augeas, which already has its own lenses that specify the syntax of the files.
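Since the reader/writer side is just the two methods described above, a custom one is tiny. A hypothetical Ruby sketch of a file handler that reads and writes under an alternative root — the class name and the prefix idea are illustrative, not part of the CFA API:

  # A minimal CFA file handler: only read(path) and write(path, content) are expected.
  class PrefixedFileHandler
    def initialize(prefix)
      @prefix = prefix                   # e.g. "/mnt" for the installation target
    end

    def read(path)
      File.read(File.join(@prefix, path))
    end

    def write(path, content)
      File.write(File.join(@prefix, path), content)
    end
  end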
And there are many available lenses, so we don't spend time writing our own parsers for new files. Yeah, and the model is the high-level API. It allows higher operations like enabling something or removing something. It's coupled with the parser: it knows its parser and usually depends on it, so there is no option to switch to a different parser for a given model. But you can pass your own reader, as I mentioned before. Models also currently allow switching the reader globally, so if you have more models, you can switch in one place so that all models use a different reader, like a network one or an installation one. Models also ensure consistency: if a config file has, for example, two colliding options, the model should ensure that they are handled properly. The model is the thing for the target user, so, as I mentioned, it should be high level and easy to use for them. Examples are the GRUB models, which I will show in a few minutes. So now, the more interesting part: examples. So let me show — open it. Currently this is a CFA plugin for GRUB2. The idea is that each piece of software has its own plugin that handles all its files. Currently for GRUB2 we handle four configuration files. For example, the easy one is the device map, which is a mapping between kernel devices and GRUB2 devices. And how it looks: it's a model, it has its own parser, and it uses Augeas — you just say: I want Augeas, and use this specific lens for it. It has its own path, so there is also a constant for where this file is. And as you can see, for example, we ensure consistency. GRUB2 — or basically, yeah, the font. So let me enlarge the font. Okay, I hope we can enlarge this one. Okay. So as you can see, in save we ensure that we use at most eight devices, because that's a limitation of GRUB: it doesn't allow more, which is caused by some hardware limitations. So there is a consistency check. There are helpers like: get me the GRUB device for a given kernel device, get me the system device for a given GRUB device. So it's really quite a high-level API that hides some details like ordering and such. It allows editing and removing, and it also gives you the GRUB devices, which, as you can see, filters out comments — so you just get the real entries. And this is a simple model. For a more complex one it's even more obvious how it can make your work easier: the default one, which is usually the most interesting config file for GRUB2. So there is a set of simple attributes, which basically means it creates a Ruby accessor called default for the key GRUB_DEFAULT. Those are the common string ones: it's just declared that there is such an option, and you can read it and write it. And then come the more tricky ones, like kernel parameters. Kernel parameters have data that looks like a plain string, but in fact it's more complex, because it's a command line, which has its own internal logic. For example, you are interested in whether your default kernel gets a given parameter. So the model defines its own kernel parameters object, and then you can query it: like in the documentation, there is the parameter quiet. If you get true, it means there is such a parameter. If you get false, it means there is no such parameter. If you get a string, it means there is a single instance of this parameter with the given value. And to make it even more interesting, the kernel allows specifying a parameter multiple times. So if you use, for example, two serial consoles, you can ask for it and you will get all of them.
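Roughly how using these models looks from the caller's side. The class and method names below mirror what the talk describes (the device map helpers and the kernel parameters query), but they are my reconstruction, so double-check them against the cfa_grub2 sources:

  require "cfa/grub2/device_map"
  require "cfa/grub2/default"

  # Device map: high-level helpers instead of hand-editing the file.
  map = CFA::Grub2::DeviceMap.new
  map.load
  puts map.grub_device_for("/dev/sda")            # kernel device -> GRUB device
  puts map.grub_devices.inspect                   # real entries only, comments filtered out

  # /etc/default/grub: simple accessors plus a kernel command line helper.
  default = CFA::Grub2::Default.new
  default.load
  puts default.kernel_params.parameter("quiet")   # true / false / value(s)
  default.save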
So this really makes working with the configuration file much, much easier for the target user, because you have helpers that ensure everything is properly quoted, properly written and properly placed. Another example: if you have a configuration option that has just a few possible values, you can of course let the user add anything they want, but there is a risk that GRUB doesn't recognize such an option. So a good model should state what the possible values are, and then, when you set it, it can again validate that a proper one was used. OK, and now let's check how it actually works. So now I show you YaST code that uses this device map, which also shows that there is, of course, much more logic above CFA. For example, in YaST the idea is that we use udev devices, and we want to use them in the device map, but CFA itself doesn't know anything about udev. So this is again a layer above CFA. If you want to check whether the device map contains a given disk, you need to check that it contains the given disk translated to its udev device. But if you do some proposing, some filling and ordering, you can just ask: give me all disks with the hd prefix sorted by BIOS order, and similar things. And now let me show a bit about CFA itself. This model has one interesting feature I would like to show you, which is how it sets a value. Which is, sorry. Yeah. So how it sets a value is that it basically first tries to modify the value if it already exists. If this value is not yet defined in the configuration file, then it tries to uncomment it, because what usually happens in default configuration files is that they contain some comments, and below it says: if you uncomment this line, this happens. So we basically replace this commented-out option and use it. And only as the last option do we add a new line at the end of the file. So it tries to behave quite smartly when it modifies something. Yeah. And here is the example implementation of the memory file. As you see, it is very simple stuff: just read and write, holding the content in internal memory. OK, so this is an example of how it looks. Do you have any questions regarding this API or its possible usage? OK, it looks like not. So thanks for your attention and enjoy this evening.
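As a recap of the write-back behaviour just described (modify an existing assignment, otherwise reuse a commented-out default, otherwise append), a rough line-based sketch might look like the following. This is illustrative pseudologic, not the actual CFA/Augeas implementation, and the method name is made up.

```ruby
# Rough sketch of the modify-or-uncomment-or-append strategy described above.
def set_value(lines, key, value)
  new_line = "#{key}=\"#{value}\""

  if (idx = lines.index { |l| l =~ /\A\s*#{Regexp.escape(key)}=/ })
    # 1. Prefer modifying an assignment that is already active.
    lines[idx] = new_line
  elsif (idx = lines.index { |l| l =~ /\A\s*#\s*#{Regexp.escape(key)}=/ })
    # 2. Otherwise reuse a commented-out default such as `#GRUB_TIMEOUT=5`.
    lines[idx] = new_line
  else
    # 3. Only as a last resort append a brand-new line at the end of the file.
    lines << new_line
  end
  lines
end

# set_value(["#GRUB_TIMEOUT=5"], "GRUB_TIMEOUT", "10")  # => ["GRUB_TIMEOUT=\"10\""]
```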
YaST has been trying to find a solution for working with configuration files in a way that is easy and reusable, while ensuring the consistency of the resulting configuration. The answer is the Config Files API (CFA), a generic framework for working with configuration files in Ruby. Although it is currently only used in the yast2-bootloader module, CFA will become one of the key components of YaST in the near future. Its design and foundation look beyond YaST, making it a useful resource in any environment that needs programmatic and semantic management of configuration files. The talk provides an overview of CFA's architecture and down-to-earth examples of how CFA can be used and extended.
10.5446/54579 (DOI)
Good afternoon, everyone. My T-shirt is lying: I'm actually straight out of Vienna right now. We got in here like half an hour ago or so, because we had our Kolab taster event there yesterday. So my brain is still halfway on the road, but it's all good, we're going to be good. I'm going to talk to you about shaping the future through collaboration, because the primary role I feel today is as the CEO of Kolab Systems, a pure free software enterprise. In fact, what we do, and I hope all of you already know this, is Kolab, which does all these wonderful things starting with your email, of course. I mean, email keeps on dying, yet keeps growing from year to year, so I think it's not going to go away anytime soon. So we do the whole range: calendaring, tasks, notes, email, contacts, sharing. It runs wonderfully on openSUSE, by the way. Hans de Raad, who was here a second ago and just helped me set things up, has been working with the openSUSE community for years. So if you want to install it on openSUSE, that's really, really easy. But I'm not going to talk so much about what Kolab is or what it does. In fact, I think you can all find that out for yourself. What I want to talk about more is actually the why. Why does it matter? I mean, what is driving us as a group, as a company, as a team? There are quite a few of us around here, since we're having the Kolab Summit again in parallel to the openSUSE Conference. We did that last year in The Hague together. We all felt it was a great success because of the way people could switch between the conferences. And so you have quite a few of the Kolabians here. And at some point we sat together trying to find out why it is that we do what we do. What is it that drives us forward as a team, as a company, as a solution? What are the important things that make us get up in the morning? And so we did what everyone does: we did a little retreat in the Bernese Alps. That's an original picture from that retreat, in fact. And we tried to put our heads together. And strangely enough, when we wrote down what mattered to us, it very much came down to the same principles, and the word freedom ended up on there multiple times. So we condensed it a bit. And for us, the reason why we do what we do is that we want to provide freedom and choice through technology. It is the culture of the company, but also the culture we want to inspire in society. And we want to deliver that freedom through collaboration, because for us, collaboration, the ability to work together, is actually what binds human society together. It is what really, really drives this. So this is basically taken straight from the heart of what free software means, of what open source is. It's what drives our community. And in fact, for us, that is exactly what we put at the heart of our business. And that's why we don't do any open core. We've never done any proprietary bullshit. We do straight free software only, and have always done that. Now the problem is that when you do this, right, when you do things that matter to you, sometimes you think people need to get that straight away. They need to understand why it is that you do what you do, why that matters, and why it should also matter to them. You think that they should use it just because it's open, just because it's free. And we as a community have had a long, long history of trying to just say: look, of course, it may be a little bit worse, but you should still use it because it's open. It gives you freedom.
And that is true. It is however, if we are fully honest with ourselves, very often not enough to grab the mainstream attention, to grab a sufficiently large number of people and make them want this. I mean, we need to grab them somewhere else. And so when we started Collab, we actually thought about what do we need to do to get this into the mainstream? I mean, we want to actually make a change. We can change things in our community. And that's wonderful. But that may not really reach the six, seven billion proverbial mailboxes that Jeroen keeps talking about. We want to have something that actually drives a larger change. So how do we get there? And for me, I mean, you see that actually what we have like proper design now, you will see during the Collab Summit, we have really good ways of presenting things. Now, because we're trying to make this appealing, that comes from an underlying, deeper motive. And the place where I personally took inspiration from as well, and the team shares that feeling. I mean, I myself, I called it what I usually call the Tesla moment. The Tesla moment for me is, I mean, besides Elon Musk being a very interesting guy and so on and so forth, is defined by the fact that Tesla decided to approach the subject of electrical mobility in a completely new way. Before Tesla, virtually all electric cars were pretty obviously built by people who hate cars, for people who hate cars. They were dinky toys, puny in comparison. You can see they're still reflected in some of the electrical cars that big automobile manufacturers built, right? They look like a cheap copy of their actual car. Tesla turned that around. They said, all right, we want to actually make people drive electric cars and enjoy driving electric cars because that's when they will buy them. And that's when we can actually get people to ride electric cars and only electric. So they said, we're going to build the best car and it's also going to be fully electric. There's going to be no fuel option here, none. So you do something that is better that grabs people where they are by what drives them, what motivates them, what interests them. And then you do it in the right way. You put the right substance into it. In order to actually be able to drive a mainstream change. And I think Tesla has done that to a very large extent with the automobile industry. When you see the amount of change that Tesla has affected into the automobile industry in such a short period of time, that's quite amazing. I mean, now everyone says, of course, you're going to do electric cars. And of course, all the recent scandals also helped, but it was really Tesla that set this change in motion in the way they approached this. They built cars that people wanted, like really wanted. In fact, I know many fans in the free software community of Tesla cars because they're rather interesting. I mean, yes, they have some issues. They're not perfect. It's all good. But the way that they decided to approach it, I think we can learn from. And it's funny because, I mean, Tesla does some things rather right. I mean, when they said, oh, all our patents are belong to you. You can use all our patents to build your own electric cars. The one thing you can then no longer do is sue us for patent infringement. We get to use your patents too. We're really building a no fly zone for patents, which should sound familiar to quite a few of you, I hope, because in fact, we as a community did that first. 
That concept is actually the very same concept that the open invention network has taken. And now I'm absolutely certain that Elon knew that and has seen that before also because I mean, you see that his other very famous company, Space Access, also a signatory to the OIN. The point here is to create a no fly zone for patents in order to allow networks of innovation to happen, to drive innovation in a broader sense, which is exactly what we do as a community. We drive innovation by working together, sometimes even with our competitors in order to build better technology that actually helps people, that builds something that is better. And it's through that collaboration again that we actually are able to achieve so much more with a diverse community of people that isn't... We have, yes, we have a lot of people, but when you look at the actual number of developers and the amount of change that they affect, we as the free software community are so much more efficient than most proprietary companies I've seen. We generate so much more in innovation. It's quite astounding. We achieve a lot more with much fewer resources. So in fact, even though Tesla didn't exist at the time, the company, obviously a person did before, but Linux to me is actually another example of where we had something like a Tesla moment. You see, Linux has spread much to the dismay of some people, we know, on its technical merits to a very substantial part. It has also transported the principles of open innovation, of collaboration, of working together into areas that before were really, really close to them. I mean, if you ask which company is using Linux today, the answer is pretty much everyone, right? There's not a single one that doesn't have it anywhere. So this actually was, even though we may not have known it at the time, another such moment. So I believe we can learn from this in the way we approach what we want to do by seeking out these moments, by understanding that if we can combine the better with the right, that we can actually bring about much faster change than we sometimes do by only focusing on doing whatever we think is right, but neglecting to also understand that we need to build something that's also better. Because the innovation that we have had, I mean, think about the world as it would be without free software, right? I mean, most of the internet and Android handys, obviously, all of this has been brought about by this amazing big ecosystem innovation. We must be thinking about groups of innovation, ecosystems of innovation. And when we look at the hardware side, and that's a story that I personally currently find rather interesting, which is why I thought I'd shared with you, we see that there we've had really dramatic absence of that. I mean, we have ARM still, yes, but most of the world right now, most of the servers in the data centers in particular run on Intel, the vast majority. So Intel is controlling this, and Intel is a very proprietary company, as most of you know. So Intel is controlling that space to a very, very large extent. And whenever something is proprietary, and I'm not telling you anything new here, the question for us is, can we actually know what is going on in there? I mean, anyone who's been in our community for more than a short while will have seen several of the conversations about what is going on with the hardware platform on which we stand. 
In fact, even though we knew it a while ago, recently we finally got mainstream attention on that in the sense that Intel actually has a CPU within the CPU that talks over the network, it effectively layer minus two in a way that we don't know, right? It bypasses our operating systems, it bypasses all sorts of control measures. So we can't really change that code, we don't know when it's compromised, we don't know what's going on in there, and that from a control perspective for all of us is actually an issue. Now what I find so interesting is that there is actually other things going on in the hardware field. For me, if we were thinking about what it is that we actually would want, right, we would want an open architecture, right? We would want an architecture that we can all understand, we know what's going on. We want the chip design to be open, we would want to know what is in that chip, we would want to be able to build our own chips, we would want our own open firmware, and then we would want to run actual free software on top of that. Because we want a fully free stack, because a fully open and a fully free stack is the only one that we can fully trust because we can fully understand it. Of course the problem is, if we were to start from scratch building something like this today, we'd have a problem, right? I mean, the level of investment that you would need to build something that on the hardware level would be able to compete with data center hardware that Intel provides, that's rather substantial. I mean, it's not very simple. It seems dubious that anyone would actually ever do this because even Intel didn't get here overnight, right? The Intel architecture is derived from ultimately, personally, what was in Z80, a SEMBA programmer actually at some point in my life, very early on. When I then saw the 386, after having spent some time on the 68K, boy was I disappointed because it was essentially the Z80 on steroids. And it kind of still is. I mean, they cannot change too fundamentally. So there is another architecture, obviously, which is the power architecture. Now power coming from IBM also not necessarily the company that's always been extremely open. I mean, a lot of the monopoly abuse rules were written by them. However, they've also reinvented themselves quite a few times recently and what they have done and what has gotten very little attention, and in fact, most of people in our community that I speak to about this have never heard of this, is that they have put that technology into the open power foundation, which is an actual membership open foundation where people can join and can collaborate on building machines, chips, their own designs, working together on the next generation of hardware. And in fact, people do. I mean, Google is involved in their rec space. In fact, they've recently announced that their next data center machine is going to be a Power 9 machine. They've put out the design at the open power summit and said, here, this is our next design. The Chinese are now building their own open, their own power CPUs and they disabled some parts that they don't trust in the crypto side and so on and so forth. We don't trust this. This comes from IBM. We choose not to include these parts. We build our own CPUs now. I think that is really, really fascinating, actually. It's a really fascinating story and I think it goes exactly in the right direction. Because what we now start seeing is that people start building their own boxes. 
They start contributing to the designs and they start working on this, and again, OpenPOWER also has this principle of non-aggression on the patent side, which gives us a very, very fascinating way to get data center hardware that is actually really, really strong. I mean, when you look at the performance of Power versus Intel, Power is actually quite interesting. It is very good at heavy compute. It is very, very good at parallel workloads. It is really extremely powerful hardware. And so we've been working with IBM now on actually supporting Power as well officially. I know SUSE is fully on Power; so is Red Hat, just for the record. But ultimately the Power platform has some properties that are very, very fascinating, especially when you can split your architecture up into many parallel threads, which is something Kolab does really well. So for us, it was very easy to actually support that kind of approach. Because in order to trust, right, we know we want openness, we want control, and we want the ability to build our own. That is the necessary prerequisite for us to actually be able to trust. With Intel, that's going to be hard to have. There is ARM, and it has a lot of very good use cases, which is a lot better. And now there's also Power, which is actually handled through a foundation, which I find is a very interesting model. And in fact, you will hear tomorrow that we've also joined the OpenPOWER Foundation, because we find this interesting enough to get engaged ourselves. Because while it's not perfect, and the patent rules make everyone from our community cringe, these are hardware people, and they think differently about some things, right? The hardware and software worlds are not always thinking the same way. However, it's a big step in the right direction. And I believe that as a community we should really think about how to engage with this, because we need the things that we can actually build upon. So we're going to be talking about that subject as well at the Kolab Summit. In fact, tomorrow, I believe, Dr. Meyer from IBM, who is the director of hardware research in Böblingen, will be talking about Power, what they're thinking about, the new architectures. Because everything is so small already, right? Making it smaller is no longer really an option. So now we're thinking nanotubes and all these fancy things, so he has some things to talk about there. And I would like to invite all of you, in fact, to come and have a look at this and get an idea of what's going on on the technical side, because it's really quite fascinating. You also get some stats comparing Power 8, Power 9, and so on. So there's some interesting stuff he has to tell us about what's going on with Power. And of course, we're going to be talking about freedom in the cloud in particular, given that Safe Harbor is dead and Privacy Shield is just about to die. Everyone expects it to fall apart very soon now, and there are no really good answers. So this is a time where we as a community have answers to the question: can I host this myself? Do I have to buy this only from one vendor in the US that gives it to me as a cloud service, or can I actually run this under my own control, on my own servers, or on the servers of a provider that I trust? We have very good answers to that. In fact, we are the ones who have the best answers right now.
So we've invited one of the lawyers from JBB in Berlin, which you might also know because it's the firm where Till Jaeger works, who has been the number one GPL enforcer in the world. And one of his colleagues is going to come and join us and talk about Safe Harbor, Privacy Shield, and where he sees that going given the current state of affairs. And of course, as we did yesterday in Vienna, there's going to be plenty of beer and meat, so I hope you'll also join the barbecue. And I hope you will all grab myself or any of the other fellow Kolabians and sit with us, drink with us, and talk with us about how we can actually use the moment that exists right now to drive openness further down as well as further up. The time for that has never been better. So let's do that together. Thank you very much. Yeah, sure, if we have questions, go ahead. Questions? Oh, come on. All right, well, thank you very much. Appreciate it. Thank you, Doug. The Kolab Summit will be in the gallery tomorrow, and we'll see you there. Thank you. Thank you. Doug.
Georg is the CEO of Kolab Systems AG and one of the leading entrepreneurs in the Free Software world: a self-taught software developer, traditionally trained physicist, author, and founding president of the Free Software Foundation Europe (FSFE), he has been involved in most of the crucial battles for a society that is based on openness and freedom.
10.5446/54580 (DOI)
It's a pleasure to have him back. The last time we saw him was in 2011. And he's a techie and his daily job does a lot for Lennox and a lot for our community in helping get what we produce out to the world. And so I want to welcome Suza's president of strategy alliances and marketing, Michael Miller. Thank you. Thanks very much. Now, I heard yesterday that everybody watches this cat to determine whether or not what the speaker is saying is legitimate, right? So if the arm stops, that means I'm saying something wrong, right? And as long as the arm is going, everything is good. OK, I'm going to stick with that. Now, I'm going to try to handle a microphone and a remote at the same time. I've never attempted this. I don't know how to juggle, so we're not sure how this is going to work out. All right, so that didn't work. OK, so I'll start out by saying my name is Michael, and I've been using Lennox. Nice to meet you all. And I've been using Lennox for quite a while, though I have to admit that when that picture was taken, I'm so old that when that picture was taken, there was no Lennox. When I was that young, it was the time of Atari. Anybody remember Atari? Yeah, Commodore 64. Yeah, so I didn't have the right pictures. But I did have, I was using Lennox, I would say, about 18, maybe 19 years. And I had an early stage of my Lennox usage, kind of my formative youthful stage. And I experimented with all kinds of different distributions. I used Caldera a little bit. I used Red Hat. I went through a wild and crazy Gen 2 phase, because I felt like I had to build any computer that I allowed in my house. I felt like I had to build it myself and then run Gen 2. And that got a little bit high maintenance. So I moved on from there, used Ubuntu for a while. And then as I became older and wiser, I discovered OpenSUSA. And that was about six years ago. Thank you. I found my way to the light. And I've been using OpenSUSA ever since. Now, not only am I using OpenSUSA because I love using Lennox, but as you guys know, it's part of my job at SUSA. I can use OpenSUSA as my work OS. So I use that as my day-to-day operating system for everything I do at work, as well as what I do for my own personal use in my hobbies. And I've got to tell you, I am really, really enjoying Leap. I've used a number of OpenSUSA releases over the last six years, and I enjoyed all of them. But Leap has been awesome. The hardware enablement for the machines I use, the performance, just everything about it has been working fantastic for me as a really stable, productive operating system for both my personal experiments and my work environment. Now, someone's probably thinking, OK, what about tumbleweed, right? Whenever I wear the Leap t-shirt or I mention Leap, somebody says, yeah, but what about tumbleweed? OK, so tumbleweed is awesome as well. I keep a machine under my desk at my home office that's my tumbleweed server. And I do stuff on there, tinker around, and experiment. So I do use tumbleweed as well. So I've got to give fair share to tumbleweed, because it's pretty amazing. Now, part of what I do in my job at SUSA is to give presentations. I go around to different meetings, different conferences, and do keynotes and things like that. And as a techie guy, you find that when you do the same thing repeatedly, you think to yourself, hey, maybe I should automate that. Or maybe I should somehow make a tool for that. Or if you do, there's four different ways of doing the same thing. 
I should find one way to do that and make it consistent and build a tool. Anybody familiar with that concept? Something like you asked maybe? All of us as engineers, we want to solve those problems. But when it comes to presenting, especially with keynotes, that doesn't really work so good. When you do that, you end up with something that I like to call YACC, which means yet another corporate keynote. We've all seen them. I mean, and I have to admit, in my years, I have delivered a number of YACCs. And I apologize if any of you were subjected to those. But we're here at the Open SUSA conference. The theme is have a lot of fun. So I wanted to be really sure I wasn't delivering a YACC at you guys, especially at 10 AM in the morning after an awesome party last night. So I wanted to do something a little different for an Open SUSA audience than I would for a typical corporate keynote. So as you might have noticed with the zooming around and stuff, that I'm using Inkscape and a plug-in called Sozi just to kind of have fun and experiment, do something a little different as part of the presentation. But what I really want to talk about is SUSA and Open SUSA. And I have to say, I have to acknowledge Richard for coming up with this clever phrase of Open SUSA and Open SUSA. You know those Brits, they have such a way with language, don't they? It's kind of too bad that we may not see so much of them anymore, but they're so clever. And when they use expletives, when they're swearing, they come up with these crazy swear words that you've never heard of before. In my team at SUSA, actually, we occasionally have honorary swear like a Brit day, where on IM or IRC or even on the phone, we're obligated to use British expletives or make them up. Because the other thing about the Brits and swearing, you could just make shit up. And it sounds like something a Brit would say. You could make up make-believe words. And you just have to have the right tone. Blimey. So anyway, this is really what I want to talk about today. Now, Doug referred to OCS 11, which was the last time I was here doing a keynote. And that was about five years ago here in Nuremberg. And we had lots of fun at that conference. And first, I want to remind everybody of a few of those special moments. You might have remembered Richard with the long green hair. How many people were at OCS 11? OK, so there's a fair number. Yeah, Richard did indeed have long green hair. And he still has the long green hair. We had Andy with the kilt. There's Andy's out there. He's not wearing a kilt this year. I think both these guys can pull that off pretty well, the hair and the kilt. We had a mechanical bowl. And I've got a great picture here of Yoos on the mechanical bowl. I also found lots of pictures of people that had fallen off of the mechanical bowl. And I kind of relate that back to the beer. There was an awful lot of old-toed OpenSUSA beer at OCS 11. And you combine that with a mechanical bowl, and you're going to have some fun. So I have to thank Kostas for letting me borrow his pictures. The guy took like 600 pictures at that conference. And they're all still up on Google+, if you want to see them. They're really fun to see. All right, so moving along here. At that time, we were at OCS 11. It was a lot of fun, but it was also a very troubled time. There was a lot of fud. There was a lot of fear, uncertainty, and doubt floating around. This was right after Attachmate had acquired. The Attachmate group had acquired Novell. 
Guys, you remember when that happened? People thought, oh, well, this company won. Who's Attachmate? I've never heard of those guys. And why the heck are they acquiring Novell? And do they have any idea what the heck to do with SUSA? So there was some real concern about what is the future of SUSA and what is the future of OpenSUSA under this new ownership model. And I was coming from the Attachmate business. I was part of Attachmate for, at that time, probably 11 years or so. So it was really nice to be able to come to OCS 11 and be able to talk about Attachmate's plans and their intentions and what they were thinking about SUSA and Open SUSA. And I think we all found that what happened after that acquisition was really good for SUSA and OpenSUSA both. Some really great stuff happened. So for example, SUSA became an independent business unit. So prior to that, it was a product line within the Novell business, not its own independent corporate brand and business unit. So the first thing the Attachmate group did was separate SUSA off and allow it to run as its own independent business unit, which means we could do all kinds of great stuff, including setting up our own website, SUSA.com. Until that point, there was no SUSA.com. There were just SUSA product pages on Novell.com. So now we have SUSA.com. We also created our own SUSACON, our own user and partner conference. And I think some of you actually have been there. We've had a lot of great Open SUSA support and presence at the SUSACON events. These are pictures from last year's event, which was in Amsterdam, which was awesome. Having an event in Amsterdam is a really, really good idea, I would say. I think we should do that more often. This year's event is going to be, you guys are going to think this is totally crazy. This year's event is going to be in Washington, DC, which is not the crazy part. But the crazy part is that it's going to be the week of the US presidential elections. Kind of crazy, isn't it? So you might think, wow, DC is going to be a zoo that week. The reality is that the town's going to be empty. The presidential candidates, or I like to call them actors, because it's really just a bunch of theater. The actors are all out in their swing states trying to move the needle and get their swing voters. Congress is not in session. DC is a ghost town that week. So logistically, it's actually a very good week to be there. The vendors and the venues, everybody loves having an event in town that week. But I'll have to say, we will have to have a little fun with the politics. I just can't resist. Now, I have a rule when keynoting, particularly if it's a yak, I don't talk about politics. But since this is not a yak, I'm just going to say one little thing, which is because I can't help myself. I live north of Seattle, back in the US, and I'm about 20 miles from the Canadian border. You can see where this is going already. So I fly off for the week for Suzacon. And my family has already informed me that, depending on how things go, when I come back at the end of the week, they may not be there. They may have already hightailed it over the border. And I might have to go on up to Vancouver, BC, and try to find them. That's all I'm going to say. No political opinion there at all. All right, so in addition to creating Suza as an independent business, creating the website, Suzacon, all of that stuff, we were able to start growing the business. And this is the essential thing. By doing those things with Suza, we were able to start growing. 
And that was the foundation of all the other good stuff that followed. We've had solid year-over-year double-digit growth from that point forward, and we still do. And that growth has allowed us to do some really cool things, some innovation. Many of these innovations, and I just picked just a couple cool examples. There's so many of them. I didn't want to try to list them all. But many of these things are things that we do in collaboration and in co-development upstream with the open Suza community. We're able to do this kind of innovation because the business is able to reinvest in itself because we're growing. And also because we were growing, we were able to actually join into whole new market spaces that we weren't in before. We were able to expand beyond our core enterprise Linux business. And we joined OpenStack, for example, some years ago. We joined as a Platinum founding member. And our own Suza's Alan Clark, who many of you know, is the chairman of the board of directors for the OpenStack Foundation still. He's been the chair of the Foundation since its beginning. So now we have an OpenStack distribution as well. And then we also engaged in SEF in the early days of SEF, back in the ink tank days. We engaged in SEF before many others did, and now have a really exciting distributed storage product. We just released our third version of that technology as well. So now we've expanded the scope of what we're doing in these new areas, which also expands the scope of what we can work together on and collaborate on with the Open Suza community as well. Now, everything was growing great, growing, doing all this stuff, and bam, another acquisition happened. MicroFocus acquired the Attachmaker Group. Now, putting aside the fact that it's a UK-based company and I'm not going to speculate on what that may or may not mean at this time, I don't think there was a lot of uncertainty and doubt this time. Again, putting aside yesterday's announcement, I think everybody realized this time when there was an acquisition, there wasn't, oh no, what's the future of Suza? I think it was very clear. And it's clear because the MicroFocus leadership team articulated this during the acquisition process, and then they followed through on it afterwards, that Suza will remain an independent business unit and brand. In fact, I would say we've even become slightly more independent and we've increased our investments in a lot of ways. We've maintained our core leadership team. Nils Brockman, who was our leader, is now CEO of Suza. His team is the same group myself, Ralph Floksa and others. And then we've actually invested in expanding the executive team. And a guy named Thomas just joined us as CTO, and he was actually here this week. I hope some of you met him while he was here. So we've actually become a little bit more independent and actually expanded that same leadership team. And that growth and expansion is great, but what really matters is investing back into the company in the form of people. So right now, if you went out to our job site, you would see over 110 current job openings. We are hiring like crazy. In fact, I would say that we'd probably hire on top of this about another 100 people by the end of our fiscal year. That is a lot of people. And it's across all different functions in the organization, but the one area where I would say the majority or the biggest area of hiring, of course, is engineering. 
There are a lot of engineers coming on board to work with us and work on all those new technologies. And that brings me to what are we working on? So what's really cool, if you look at the new stuff that we're doing and that we can all collaborate on together, it's a whole new spectrum. It includes things like NFV and software defined networking, platform as a service, Cloud Foundry, high performance computing, containers and Docker, all kinds of exciting stuff. And we're bringing in new staff and we're developing new technologies and strategies. And our relationship with, our collaboration with, and our co-development with OpenSUSA is fundamental to all of these things that we're doing, whether it's OpenStack and Ceph, whether it's these new technology areas. Our relationship with the OpenSUSA community is the foundation of how we innovate and engineer all of this stuff. Now, I feel like we've come a really long ways if you look back at OCS 11 and where we are now, we've done some amazing things just to get here. And we've stuck through some hard times together. There was a time, I think, when OpenSUSA community was in doubt about whether this was really going to work. We came through that and we're growing together. And most recently, we did something that I think is like revolutionary in the industry. It's a completely new model for co-developing and innovating together. And you guys might have heard about some of this. For example, Kulu presented some of this about our shared core package concept at Fosden this year. It's really the foundation of a whole new way for us to innovate together and get an increase in agility and innovation while maintaining stability at the same time. And what this allows us to do is create a whole range of distribution options that service all kinds of different needs. Everything from personal users, people experimenting, knowledge workers, makers of all kinds, developers, people creating stuff, IT infrastructure, enterprise professionals, as well as the largest super computers in the world. This co-development produces a distribution, one of these whether it's Leap, Tumbleweed, or Slee, that fits all of those different needs. And we can move faster, innovate more, and bring more technology to the world faster because we have this shared core in this co-development model. And the first time we tried that was really Slee 12, SP1, and Leap 42.1. And that really worked well. I think it was just amazing. I think we've proven how well that new revolutionary model can work. And it worked so well that later this year, as we do the next batch of releases, we're going to move even closer together, where we'll be sharing a kernel, systemD, Nome, and we're going to keep building on this model. And it's quite amazing. In fact, this is, I think, even cooler. It's the fact that we can now start Slee 13 when the development cycle starts on that. We can start that based on a Tumbleweed snapshot. And if you compare that to what we used to do, and we used to have to wait for a stable OpenSUSE build release long before the Slee development started, because we needed time to adapt and tune before we were ready to even start the Slee development. So you had this big gap, this blackout period, where you couldn't incorporate new innovation into Slee because you needed that blackout period. But now, because Tumbleweed is this consistent, standard, stable build, the Slee development can start and grab the latest Tumbleweed snapshot and just go from there. 
There's no blackout period, which means we can incorporate more of the very latest upstream innovations directly into the Slee development process and bring more innovation to market faster and maintain stability at the same time. And that's really the heart of what this model allows us to do. So I'm really excited to be part of this. I'm really proud of what we've all done together. And I look at this both from the perspective of an OpenSUSE person who's passionate about OpenSUSE and dedicated to OpenSUSE as a user. And I look at it from the point of view of a SUSE employee and all the great things that we can do for our customers and partners. And I feel like we're entering a whole new era where our trust in each other, our co-development, our innovation that we're doing together is going to allow us to do all kinds of cool stuff and keep having lots of fun. So with that, I would like to show how everything that we talked about, everything we do together, is all part of the chameleon. We're all green on the inside. We're all in this together. And let's keep doing that. Thank you. OK, so speaking of fun, as you guys probably know, we really enjoy doing music videos at SUSE. You've probably seen some of them. You might have seen the what does the chameleon say one or the hot patching one. Well, I've got a video I wanted to roll for you guys that we literally just put through production this week. Nobody else in the world has seen this. Maybe five people have seen this. So I want to roll this for you. And I'm going to try not to screw this up. Why am I not? Tony, holding this mic has just gotten me all out of whack space. That should do it. All right, let me do this. Find my mouse. Trust me, you're really going to love this when I really start playing it. If I can get there, because I can't see what I'm doing. Well, now I really messed it up. All right, guys, bear with me for a minute. Richard's going to save me. It worked in testing. It worked on my machine. So I thought hitting space would just do it. Yeah, me too. Yeah. There we go. OK. Yeah. Yeah. Yeah. Yeah. Yeah. Oh, yeah. We're all free. All right, so that was called Code Together. I think that's pretty awesome. Now, however, and I think the cat approves, that's good. I, however, think we probably need to do one thing. We need to get Richard with the long green hair into that video. That's what I think. I think that would make it even better. All right, so Richard, do we have time to do a little Q&A? Do you think? Should we do that? Five minutes. Wow, that's all right. What do we got? Any questions? How'd you like the video? Oh, no, no, no, no, not Andy. No. Oh, no. Damn it. Great that the emphasis is on collaborating between both sides of the chameleon. But there are certain open source based projects that are products from SUSE that haven't completely brought into the messaging, in so much as the open source projects that these products are based off of haven't been integrated within open SUSE in its entirety. So if I take Leap or Tumbleweed and try and install that project based on that production, it's not there. Or it doesn't work properly as it should. Other moves afoot to make sure that everyone's seeing green all the way through? That's a very good question. I don't know if Ralph is comfortable responding to that. You might have a more intelligent answer to provide than I would. I'm so glad Ralph is here. That's the moment we're engineering. So I can tell you that my guys actually want to contribute a lot of to open SUSE. 
And if I look at my cloud team, they usually have the packages there first and then put them into our open stack product. It is all a matter of time and you saw all the open positions and open positions don't do work. They're just open slots. So I think my promise is once we have more staffing, I think you'll see more contributions back into open SUSE. The spirit is there, the willingness is there, but the guys are just overworked. But from their heart and from their thinking, they definitely want to do that. And we will fully support it, right, Michael? That's exactly what I was going to say. Thanks, Ralph. Yeah, Andrew, I just wanted to add to that one. I'm one of those guys. SUSE Manager isn't fully as easily to install as it is on SLES. We definitely plan to do more in that respect, especially because we are starting to take over the upstream project. Spacewalk is mostly a SUSE project these days and we will have to find a new government's model. I guess one of the things we probably will do is fork it into an open SUSE project. Actually, it was mainly a matter of resources of setting priorities, but yeah, we definitely want to make it more open in that regard and also accept contributions if anyone wants to manage, I don't know, Raspberry Pi's with it or so to open it up so that maybe at some point we can incorporate that into the product. Yeah. Any other questions? All right, Michael, I want to thank you for coming out and we appreciate your talk and you being here. Thanks, everybody. Thank you. APPLAUSE
Michael Miller is the President of Strategy, Alliances & Marketing for SUSE
10.5446/54583 (DOI)
First of all, thank you very much for the invitation. As last time in Zurich, this is a very cool place, from a cultural perspective at least, not so much from a temperature perspective, but I think we'll get that done. My name is Wolfgang Meyer. I'm head of the hardware development team in the IBM Research and Development Lab in Böblingen. My team is about 300 engineers, mostly logic designers. We heard a little bit about Power 8 already, and this is a significant design; components of Power 8 are actually developed in my team. Sometimes it's not that well known that a lot of processor development is still done here in Germany, since of course Intel has most of its development teams sitting in the US. What I want to do today is talk a little bit about Power 8 and a little bit about hardware technology in general, not only Power 8. I listened to the previous presentation and there were two parts: the first part was about the standard stuff and the roadmap, and then there was what they called the fancy stuff. Of course, when I hear software guys talking about the fancy stuff, a kind of alarm bell starts ringing in my head, because it means these guys will need more performance in the future, as usual. Now, if we really want to deliver more performance, we have to think a little bit about Moore's law. This is what I want to start with, because when we look at the last 30 years or so, performance was not a big issue, and the reason for that was that we had very big steps forward in semiconductor technology. Gordon Moore made a statement in the 1960s which said: OK, within the next decades we can raise performance every two years by a factor of two. In his equation the first thing he was focusing on was the number of transistors on a chip, and when we look at the projection, Moore's law was fulfilled almost perfectly over 40, almost 50 years now. But I think what Gordon Moore really meant, or how Moore's law should be interpreted, is that it's not only about the technical side, it's also about the economic side. This means we can produce more performance at more or less the same price relation as we had back in the sixties. If we could not resolve this equation in that way, the complete digital development of the last 30 to 40 years would not have been possible, because we couldn't have afforded it. That's what we should keep in mind, and that's also why I picked this chart and called it Moore's law. What we see here is that, more or less since the 1950s, we could lower the cost of a single calculation and divide it by two roughly every two years. This means that when we talk today about petaflops, or any big number of flops, it was only possible because we had all this development. Now, as a hardware developer, looking at that a little bit closer, what we see is that the pattern has changed over the last five to six years. Besides the number of transistors we have on the hardware, the other strong source of performance had been frequency. For frequency there had been another law, which is not that popular, the so-called Dennard scaling law. Frequency could be raised because, with smaller transistors, the power consumption of these transistors also declined. Now, as we are so far down towards the atoms already, and we will get a little closer to that later, we could not raise frequency that much over the last five to ten years.
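Stated as a formula (an idealized trend line rather than an exact law), the scaling claim above is roughly:

\[
\text{cost per operation}(t) \approx C_0 \cdot 2^{-t/2}, \qquad
\text{transistors per chip}(t) \approx N_0 \cdot 2^{t/2}
\]

with \(t\) in years and \(C_0\), \(N_0\) the values at an arbitrary starting point.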
The reason for that is that there is a pretty strong dependency between power consumption and frequency, and with frequency we arrived at the break-even point something like five years ago. Intel, of course, is interpreting that a little bit differently than we do; we have some special technologies, but more or less between three and five gigahertz is where we have to give up on the frequency level. This means that if we want to allow development like this, raise performance and support all your creativity in the future, we have to rethink a little bit how we can produce performance, so that this equation stays alive. Because if this equation is not alive, then the affordability of all the things we call big data and all the other hype becomes much more difficult than we would like. What I want to do today is demonstrate some features and share some thoughts on how we can keep up with this equation in the future, and what Power technology and IBM, of course, are doing in order to do that. One thing is what we see here; this is still a technology feature. So far the main driver has always been semiconductor technology, and the question is: can semiconductor technology no longer contribute to performance improvements, or to cost-performance improvements? The answer is: well, there are still some features which can help here, and this is a typical one. Since we cannot raise frequency, we go towards parallelization. You have all seen this trend: you go from dual-core to eight-core to 12-core designs. This will continue for a certain while, because we still have the lithography that allows us to put more transistors on the chip. But in order to make more cores on the chip efficient as well, you have to build an infrastructure on the chip which helps you to get these engines into a mode where they can really work efficiently. One very important part of that is that you have cache structures on the chip, and big cache structures at that. What we see here is a technology which allows you to build much bigger caches on your chip; as you will see later on, Power 8, for example, is very famous for its big cache structures. Now, that doesn't come by itself. It comes from this technology, and this technology is something we do especially for our chips. It's called deep trench, and it works like this: usually, in order to realize cache structures on a chip, you use so-called SRAM cells, and an SRAM cell consumes something like six transistors. What we found out is that if you are capable of building a relatively high capacitance on the chip, you can also build a cell which consumes just two transistors, a so-called DRAM cell. DRAM technology is used in memory; so far it was not possible to integrate that directly into the chip, but with this technology it is possible. Deep trench means you etch a relatively deep hole into the silicon, and with this deep hole you are capable of building a high capacitance. You blow some metal dust into it and you have a perfect capacitor, and this allows you to build these two-transistor DRAM cells on the chip, which means you get a cache density which is a factor of 3 better compared to SRAM cells. Now, of course, usually in these equations there is a burden; you have to pay for it.
The burden in this case is that the etching process to create the hole takes something like six weeks. I mean, it's just four micrometers, but four micrometers is a relatively deep hole in a processor. So these six weeks are something you miss in your development cycle, and as we all struggle with time to market, this is something where we as developers of course love to have the big caches, but we hate to pay six weeks for them in our development cycle. So that's the trade-off here. And of course another thing is that you need to be cautious, because we have so much traffic; I mean, we have a superscalar out-of-order architecture on these chips now. With multiple cores you can imagine it's not quite simple to exchange all the data in between. This morning, when I traveled from Stuttgart to Nuremberg, I got proof again of what it means to have congestion on the Autobahn. You have a similar situation in a processor today, OK? You have several cores, all these cores have a lot of different units, and there are a lot of guys who want to run their threads through these cores, since, as we will see later, we also do simultaneous multithreading. And of course it's extremely important to have infrastructure in place, buses in place, which allow you to keep these things efficient. This is also a difference we have compared to other chip producers: we use a different metal stack. What you can see on the left side is a scanning electron microscope image of a chip that we have cut through the middle. You see there are 15 layers of metal, and down in the silicon today we have something like 4 billion transistors, and of course the more layers you have and the faster connections you have, the more efficiently you can connect these transistors with one another. We have special copper layers in that stack, and the number of layers, 15 metal layers, is much higher than with other vendors. We will look at performance a little bit later on; as we heard before, Power 8 is showing pretty good performance numbers. That does not mean that Intel doesn't, OK, and we will try to explain a little bit what the difference is, but this is also one of the technology reasons why we can still get to these good performance numbers. OK, then, what I mentioned before, of course, is that we get more parallel. This means when you look at a usual processor core, well, at a Power 8 core specifically, because what we build are really heavyweight cores, OK? This is not any low-power fancy stuff; we build big machines. That's our goal. It does not mean that a big machine is always better than a small machine, but that's how we understand our business, OK? Now, usually when you run these mighty cores, of course they consume a little bit more power; we will see that later on. But usually, with today's software implementations, where we do not put a lot of emphasis on parallelization, it's pretty difficult to fully utilize the complete core. This means, when you look at the picture, in a regular processor core we have a high number of different units, and when you actually run a thread, a software thread, through these units, then mostly only a small number of these units is really utilized.
The other part is more or less waiting until the other guys are done with their stuff, and then they come into play again in the next cycle, OK? That's how it usually works. I mean, it's pretty simple; the question is, why don't you utilize these resources? And most of the processor vendors do that meanwhile; at Intel it's called hyper-threading, we call it simultaneous multithreading. This means you now also implement a supervision logic which watches several threads that we throw onto the core in parallel, OK? And this is what you see on the right side: you have a blue thread and you have a red thread, and you can run these things in parallel. Now, of course, this sounds very simple, and it is for sure the right thing to do if you want to support these parallel structures, but as a user you have to use a feature like that a little bit more carefully than the things we heard about before. For big caches, I mean, you don't have to take much into account when you write your software in order to exploit them in the right way. Here you have to, because it does not fit every workload approach. It only fits if parallelization fits the workload, and it also has to be balanced with regard to the configuration, the cache sizes, the memory sizes and the specific workload. Let me give an example. We had a POC running with Yandex. Yandex is a huge Russian search engine, pretty much the counterpart to Google in the Cyrillic-letters space. They started to work with Power 8 on their own and were a little bit disappointed at the beginning, because they could not really figure out the sweet spot between, for example, single-thread and multithread operation. So we did a POC together. They had to do some adjustments in their code, and on top of that they had to learn that running in the highest SMT mode does not mean you get the best performance. In differentiation to Intel we have different modes: you can run SMT1, which is pretty much non-SMT, SMT2, where you run two SMT threads in parallel, SMT4 and SMT8. And the sweet spot, of course, does not mean SMT8 is always the best choice. In their case, for example, with a little bit of tweaking in the code, they could get a performance improvement of around a factor of 3. When we went to SMT8 mode and tried that, we actually dropped down again to something like 1.5. So it's something you have to take into account. Also, at the beginning we of course got some surprises, since you have to pay a little burden in order to keep the threads separate from one another. This means that if you have a benchmark which is focused on single-thread performance, you may be a little bit surprised, because of course we have technology improvements and we also raise single-thread performance, but in this case let me do a simple example. You have two threads, and each thread takes one second. So when you are in single-thread mode, the complete runtime is 1 plus 1, 2 seconds. Now we say, OK, great, I have simultaneous multithreading, I switch to this mode. You pay a little burden, and we say, OK, each thread now takes 1.2 seconds. So it's still better: you went from 2 seconds to 1.2 seconds, which is an improvement. But if you have a benchmark that focuses on a single thread only, you get worse, because you went from 1 second to 1.2 seconds.
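Written out explicitly, with the numbers used in the example above:

\[
\text{serial: } T = 1\,\text{s} + 1\,\text{s} = 2\,\text{s}, \qquad
\text{SMT: both threads finish after } 1.2\,\text{s}
\]
\[
\text{throughput gain} = \frac{2\,\text{s}}{1.2\,\text{s}} \approx 1.67\times, \qquad
\text{single-thread latency} = \frac{1.2\,\text{s}}{1.0\,\text{s}} = 1.2\times \text{ slower}
\]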
So you have to be very careful when you try to leverage a feature like that, to exploit it in the right way. And I think this is also the pattern we see more and more: we need a stronger collaboration between software guys, hardware guys and virtualization guys, because the whole stack is taking more and more focus in the complete performance evaluation and the efficiency evaluation. So that's one thing; of course there are other features which are maybe a little bit easier to exploit, for example SIMD engines, which is just vectorization of the floating-point units, and things like that.

Another very interesting area is the field of hardware acceleration. This means you really implement special-function components. The reason why hardware acceleration has gained more attention within the last 5 to 10 years is, on one side, that FPGA technology has made good progress, and on the other side, that now that we have this big number of transistors available, we pull more functions into the processor scope. In the past we usually had a proprietary interface out of the processor into an I/O hub function, which was a separate chip. Meanwhile we implement this directly in the processor and are about to integrate it cache-coherently with the rest of the caches. This means you get much faster I/O attachments and much smaller latency, and that means you can leverage these acceleration techniques much more efficiently than you could in the past.

Overall, there are different types of accelerators which we distinguish in this context. Accelerators in general are a big field. We also have on-processor accelerators; this is also a trend we see. There we reserve some of the transistors and implement special functions. You see that on POWER, for example: for compression purposes or for encryption purposes we have special acceleration engines which do that.

What I want to put a little bit of light on here is specifically the area of GPU attachments. GPUs, graphical processors, have a good ecosystem. This means there is a programming language you can use, and there is also an easier way to integrate that into your operating system flow, by having device drivers and all that stuff. So from that perspective GPUs are very interesting right at the moment. For example, the Forschungszentrum Jülich runs a huge supercomputing project in the human brain space, where human brain functions are simulated. They do a very sophisticated balancing between POWER8 compute power and GPU compute power for parallelization purposes. These graphical processors are not quite as powerful as, for example, a POWER8 core, but of course their number is relatively high and they do not consume too much power. So that's a good thing. Of course you need some knowledge of how to integrate that into your flow.

The other very interesting part is the so-called FPGAs. FPGAs are special hardware which you can program in a certain language. Altera has come up with support for a language called OpenCL, which from our perspective is very well suited for quick-and-dirty, I would say, prototype development. If you want to use an FPGA in a product-like environment it's a little bit more difficult, because there OpenCL is certainly not the best choice, since you don't get all the capabilities of the FPGA. There you should really use languages like VHDL or Verilog, because they make it more efficient.
For us that means you then need some experts. For us as processor developers it's not a big deal, but for software development guys, getting into that deep a hardware language is of course a little bit more difficult. So from that perspective the consumability of this is certainly more complicated than with GPUs, but from an efficiency perspective you can accelerate specific workloads by a factor of 1000, for example for Monte Carlo simulation; we have seen effects like that in the high-frequency trading stuff. But the effort, of course, is also relatively high.

Then there is some other stuff I don't want to go into in too much detail: you can use these fast interfaces for in-memory database processing, for example using non-volatile storage like flash instead of DIMMs, and stuff like that. There is also a very interesting new area I just want to touch on here and say some more words about at the end: the area of neurosynaptic computing. There we use completely new processor architectures which don't run at high frequencies but are very closely aligned to the human brain function, and for that reason have power efficiencies which are a factor of 10,000 higher than the usual von Neumann architecture. I will talk about that a little bit later on, but the idea here is also to use this type of hardware for special acceleration purposes, specifically pattern recognition in this case. So these are some of the sources you can use to gain more performance, to resolve the Moore's law equation in a good way.

Now, in POWER8, just as examples of that, all of these features are implemented. I talked about the big caches. I talked about SMT; as I mentioned before, we have SMT2, 4 and 8 modes. We are just rethinking SMT8 right at the moment; we could not find a lot of applications which really use SMT8, so maybe it makes sense to stay somewhere at SMT4. We have a SIMD engine in there, and of course we have extremely high memory bandwidth. In order to gain this memory bandwidth we have built a special chip; it's up to 230 gigabytes per second. Talking about SAP HANA, for example, everyone sees that doing most of the work directly in the exchange between processor and memory is getting more and more interesting, so for that reason we have implemented that.

And then, on top of that, we have optimized this direct PCI interface which comes out of the processor, and we call that the Coherent Accelerator Processor Interface (CAPI). We will continue to improve that in the future, as we see accelerators getting more and more important. The background on that is this: usually, when you run a PCI interface directly from the processor, you use a traditional I/O protocol, which makes the thing a little bit slower and the latency a little bit higher, because you don't have a feature like, for example, InfiniBand's RDMA. You have more or less load/store functions, and these load/store functions do not allow a real cache-coherent integration of external components. Now we have integrated a protocol layer which works around that, and this means that with this coherent accelerator processor interface you can integrate external components and have them cache-coherently integrated into your overall workload. So you get something like a factor two to three latency improvement versus a regular, plain-vanilla PCIe attachment. And we have come up with a POWER8+ version.
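To get a feel for why a factor two to three in attachment latency can decide whether offloading pays off at all, here is a small back-of-the-envelope model in Python. It is purely illustrative: the job size, speedup, bandwidth and latency numbers below are assumptions made up for the example, not POWER8, CAPI or accelerator measurements.

```python
# Toy offload model. All parameter values are illustrative assumptions,
# not measurements of POWER8, CAPI or any real accelerator.

def offload_wins(compute_s, speedup, transfer_bytes, link_gbytes_per_s,
                 latency_us, round_trips):
    """Return True if offloading beats just running the job on the CPU."""
    accel_compute = compute_s / speedup                    # time on the accelerator itself
    transfer = transfer_bytes / (link_gbytes_per_s * 1e9)  # moving the data over the link
    overhead = round_trips * latency_us * 1e-6             # per-request attachment latency
    return accel_compute + transfer + overhead < compute_s

# Same job, same data; only the attachment latency differs.
job = dict(compute_s=0.015,            # 15 ms of CPU work
           speedup=10,                 # accelerator is 10x faster on the kernel itself
           transfer_bytes=32_000_000,  # 32 MB of data to move
           link_gbytes_per_s=8,        # link bandwidth
           round_trips=2000)           # fine-grained requests between host and accelerator

print("high-latency attachment:", offload_wins(latency_us=5.0, **job))  # False
print("low-latency attachment :", offload_wins(latency_us=1.7, **job))  # True
```

With these made-up numbers the same offload job loses against the CPU over the high-latency attachment but wins over the roughly three times lower-latency one; the structure of the trade-off, not the concrete figures, is the point.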
So this CAPI interface has a lot of advantages in playing together with FPGA acceleration. In order to get the same advantages with GPU acceleration, we have implemented another link, the so-called NVLink, NV for NVIDIA, and this is for example one of the features which is massively used in this human brain project in the first instance of NVLink.

So, just to show the differences a little bit. When we compare that, first of all, I have a lot of respect for the Intel design group. They really do a good job; they really give us a hard time to stay competitive. But I think what's important is that we have a little bit of a different philosophy than Intel, and I want to use this chart and also look at the performance numbers to demonstrate that. What you clearly see is that Intel has more or less given up on frequency at something around 3.4 GHz. As you can see, this comes in something like a 150-watt envelope, which is doable. We have a different philosophy. We said: okay, we are capable of building system designs, because we have a lot of experience with the enterprise machines, that go up to 250 watts. We pay that, but we also go up to 5 GHz. So you can certainly resolve this equation in different ways: Intel is doing it one way and we do it the other way.

The reason why we can run these 5 GHz is what we have seen before: these deep-trench capabilities, these deep-trench features which I brought up in the context of caches. You can also use them for decoupling, for circuit decoupling. This means that if you really want to go up to 5 GHz, you have to have a feature like that, otherwise you are not able to do so without burning your gate oxide. Then you see we have different cache structures which are much bigger, we have higher memory bandwidth, and we have addressable memory which is much bigger, in the range of 16 terabytes, for example, for the E880 system.

When you put all this together and look at it more from a parameter perspective, then RPE2 is actually a relatively good benchmark. RPE2 uses empirical data and allows you to compare Intel architecture to Power architecture and also to compare it to earlier generations. What you see is that Intel is running a strategy where the per-core performance more or less stays the same; this is also the case now in the Haswell-to-Broadwell transition of their Xeon architecture. As you can see, they even go a little bit lower now, at 2.2 GHz. We, with POWER8, go up in per-core performance, and at the same time we are somewhere at 3.5 GHz. When you remember the chart before, the philosophy of Intel is to go towards a higher number of cores instead of just raising frequency. So that's pretty much what I want to bring up here: it's a different approach, and when you compare benchmarks, you get results where for some benchmarks, which reflect certain workloads, it is more or less a wash, but for other specific workloads you get a performance increase of around factor 2.5. I think the important thing is that you really have to learn a little bit about POWER8 in order to figure out which workload is the right fit for this engine. It is certainly not the case that every workload is the right thing for this engine.
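The "fewer big cores at high clock versus many smaller cores at lower clock" trade-off can be made concrete with a tiny model. The core counts and per-clock throughput factors below are invented purely for illustration; only the rough clock and power figures echo the ones mentioned in the talk.

```python
# Two stylized design points. All numbers are illustrative assumptions,
# not specifications of any actual Xeon or POWER processor.

def socket(cores, ghz, per_clock, watts):
    perf = cores * ghz * per_clock                 # arbitrary "work units" per socket
    return {"per_core": ghz * per_clock,
            "per_socket": perf,
            "per_watt": perf / watts}

many_small = socket(cores=22, ghz=2.2, per_clock=1.0, watts=150)  # more cores, lower clock
few_big    = socket(cores=12, ghz=4.0, per_clock=1.4, watts=250)  # bigger cores, higher clock

for name, s in (("many small cores", many_small), ("few big cores", few_big)):
    print(f"{name:16s} per-core {s['per_core']:.2f}  "
          f"per-socket {s['per_socket']:.1f}  per-watt {s['per_watt']:.3f}")
```

Depending on whether a workload cares most about per-core speed (single-thread latency, per-core licensing), total socket throughput, or performance per watt, either philosophy can come out ahead, which is the speaker's point about having to match the workload to the engine.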
You need to invest a little bit, but if you find a workload which fits, you can of course get to much higher efficiencies than with other hardware. So that's what I wanted to bring up a little bit today.

Now, looking forward: you heard that I talked a lot about cache sizes and frequency and all that stuff. Going even further out into the future, I think this discussion will still be vital, but there will be more and more other areas which get attention, specifically in the microarchitecture. What we think about right at the moment is coming up with more flexible structures. You see the roadmap here; we also have new semiconductor technologies, but I think the real discussion right now is about whether we can come up with flexible core designs which, for example, allow you to combine cores in a kind of modular way. Because, as I described before, in a certain sense you have resources which you can share on a processor, and the question of course is how efficiently you can balance these resources against each other. The idea is to build modular cores which allow you to run in a kind of one-to-four mode: you either have one smaller component which does not get too many resources, or you can pull four of them together as one bigger core which then has a higher number of resources, or you can for example use just three of these smaller cores and combine them with a special-purpose engine like an accelerator or stuff like that. So these are the discussions we see right at the moment for future products.

Of course, right at the moment we are working on POWER9, which is a 14-nanometer chip. My team fortunately sent the first release to the fab at GlobalFoundries last week, and this means that within the next three months we will get the first hardware; then we get into a mighty test cycle, and pretty much, I think, in the 2017 timeframe we will see the first POWER9 chips. POWER9 will already realize some of the ideas I just alluded to. Going further out, for what we call POWER10 there are already plans, concepts started, which will go even more intensively in that direction.

Maybe just to give you a little bit of an impression of such a project: it usually takes something like four or five years to develop a processor like that before we get it to market. You start with a concept phase; for POWER10, for example, which will show up somewhere around 2020, we are in the concept phase right at the moment. How long that takes depends a little bit on how fast you can integrate all the various characters who have their opinion on what the processor should look like. After that you go into high-level design, where you really break it down: is this a 20-core design or is this a 24-core design, things like that. Then you get into implementation, where you write all the logic design, which takes something like half a year, and then, somewhere around 18 months before you can go to market, you really send the design data to the fab. In the fab, of course, there is a pretty complicated process: they build masks, they set up all the photolithography environment, and they run certain tests in order to prove that the technology really is capable of realizing your chip in the end.
Then you get back the first hardware and do the initial testing, which takes something like three to four months, and then you have one other shot before you go into GA, where you can fix all the bugs you still find in such a process. So you always have a very long lead time until you get to market, and this means that right at the moment of course we run POWER8, POWER9 is in the pipe, and POWER10 is in concept right now. Just to give you a little bit of an impression of that.

So, we talked a lot about technology, technical stuff, but besides that there is one other very important trend. What we see more and more is that, while we have developed a lot of proprietary stuff over the last 20, 30, 40 years, within the last 10 years we see a clear trend towards openness. Of course, when I say the last 10 years, guys like you would laugh and say: well, already in the 90s we saw Linux. But for the hardware this is a little bit different. Hardware had been kind of a closed system; there had not been too much cooperation between teams, there had been some competing parties, but it was more difficult to cooperate. Now, as you have seen before, if you really want to realize these more solution-like systems, which for example leverage accelerators, then I think the notion that this is all done by one company is getting more and more difficult. This is what we have realized for a very long time, and the consequence is that we said: well, similar to what we have seen with Linux, this could also be done in the hardware space. You just have to demonstrate the willingness to open up your architecture, and maybe also come up with a foundation which allows you to cooperate more strongly with other partners. This is pretty much what happened in 2013, when we and other companies like Mellanox and Google and NVIDIA came up with the OpenPOWER Foundation. The OpenPOWER Foundation gives a completely new, flexible framework which allows you to build hardware in a totally different way than you have done in the past, and it's not only about hardware; it's also about operating systems, virtualization, and ultimately applications.

Okay, now the background story on that is the following. Maybe you remember, somewhere back in 2012, we ran this Jeopardy! game with our Watson software.
I think this was a little bit of a step into a new era. We had Deep Blue, which was playing chess, where we could demonstrate some human-like capabilities of such a machine, and I think Jeopardy! was a step forward. Of course, when we ran that on US TV, there was a company called Google which was pretty interested in what technology this was, since, overall, when we look into the future, I think what we call cognitive approaches and all that is getting more and more interesting, specifically in the research area. And of course they also understood that this machine-human interface, specifically in this case of Watson, was solved in a pretty impressive way. So they contacted us and asked what technology this was, and in this discussion they also learned that, for example, the Watson technology is completely realized on Power technology.

With this cooperation which we started with them, they came up with some ideas which were very typical for these new kids on the block like Google, like Amazon, like Rackspace, like Facebook, which all of a sudden had become very significant players in the IT landscape. Their idea was pretty much to tell us: well, look, we have shifted the paradigm, we will not buy any boxes from you as, I don't know, Daimler or Deutsche Bank here in the German space would do, since we build our systems ourselves. But we can of course work together; we are very interested in your technology, but this would really mean you have to come up with a legal foundation which allows us to use your technology in the same context as we can do it, for example, with Intel. Our response was: okay, we got the message and we will do that, and then we will go even one step further, we will also open up the complete microarchitecture of the processor. Obviously this was the right direction we were heading; it brought us into a POC together with Google, and this means that meanwhile a big part of their search engine is not running on Intel x86 anymore but is running on Power technology.

On top of that there was of course a lot of notice of this new approach, and meanwhile we also have some technology partnerships with companies, specifically in the Chinese space, which come up with their own ideas on how to build a processor based on POWER8 technology. We have one partner right at the moment, Suzhou PowerCore, a Chinese company; they have built their own POWER8 processor chip. The background on that, of course, was also a little bit the NSA discussion: they changed the crypto engine, they changed some of the floating-point stuff, and they came up with a now more Chinese-flavored POWER8 processor. On top of that, they also start to operate in this OpenPOWER context and build their own systems with new companies like NeuCloud and others, so we see a completely new, open approach to come up with a new server portfolio, in a certain sense, which of course brings a lot of benefits for the architecture, since the more vitality we get into that space, the more we can leverage all the technologies which are in place, the more experience we gain, and the ecosystem is growing by that. So it's very interesting to watch right at the moment.

So that's the background and the idea of OpenPOWER. Now, of course, as I said, Google and Facebook and Rackspace are companies which are dealing with these new capabilities in a very interesting way. Just for you all: meanwhile we are something like
more than 200 members all over the world: technology companies, integration companies, operating system companies, software companies, you name it. So it's very interesting to watch, an interesting approach. There are other open approaches, like for example the Open Compute Project, which was initiated by Facebook. This is more on the system-integration side; they said, well, we have a very good network worldwide and a lot of users, why shouldn't we just put our specifications out on the web and ask whether someone is interested in building a computer based on that spec. That's what they started with Open Compute. Now this is combined with OpenPOWER; they build a specific messaging system just for their purposes, and I think it's very interesting to see how these two open movements come together. The same goes for Google and Rackspace; the new thing for us is that they already plan for POWER9 CPUs. POWER9 so far is not part of the OpenPOWER Foundation, but as you can see, since there is so much interest, this is one of the next steps, and the announcements will also come from the IBM side. So there is a lot of interest in using that. Okay, so that's the OpenPOWER part, which allows more flexibility, which allows new business models, which allows completely new, flexible system designs which are then in better shape to leverage all the capabilities such a platform can deliver.

The other thing I mentioned before is also interesting to watch. It's not so much about Power only, but you saw we had this neurosynaptic chip. One big trend we see right at the moment, more from our IBM Research perspective, is this extremely high, exponential growth of data. When you look a little bit closer, you have to distinguish: in traditional IT I think there is some linear growth, but if you look at the exponential growth, you mainly see that the data which are really growing faster are what we call data at the edge. These are data produced by cameras, by sensors in the Internet of Things context, by smartphones, and so on. Of course this defines a little bit different equation, because to use this data (and obviously some very popular players use it very efficiently) you have somewhat different paradigms than in the past. Most of these data are not used so far. One of the reasons is that they are very transient; they lose meaning within seconds or so. The other part is that you don't have the bandwidth to get them into a cloud environment, and this means you really have to process them at the edge if you want to use them at all.

This brings us to a field which is very interesting to watch right at the moment, more from a research perspective: what we call brain-inspired systems. There you totally go away from a von Neumann architecture. You build many-core chips, and the cores, the small little cores, are not designed like floating-point units or anything else; they are really directly aligned to the brain function. This means you have neuron functions and synapse functions realized in a network. There had been a trend in the 80s, neural networks; this is similar to that, but now you have neural networks which are realized in a hardware structure and which come with a way higher number of neurons. There is a lot of research done around that. In addition, there is a lot of discussion about machine intelligence in
general right at the moment. This means: how can we get to systems which are capable of learning instead of being programmed, which is also a very interesting field. Now, for the specific case of the neurosynaptic space, IBM has built a chip, the so-called TrueNorth architecture, with a very high number of transistors. You can run such a chip for certain workloads with an efficiency raise in the area of something like a thousand, specifically when you do pattern-recognition stuff, and at the same time you consume something like a factor of 10,000 less power. This means you can use a chip like that in, for example, a smartphone, running off a smartphone battery for something like a week without any problems. For us, of course, the big question now is how we get to programming models, how to utilize it in the best way, and what these self-learning mechanisms look like, to move us away a little bit from plain programming. At the moment I think this is very much focused on research activities, but in the next 10 to 15 years this will be very interesting to watch, since from a vision perspective this could have the same capabilities as the combination of von Neumann architecture and CMOS technology. The good thing is that the CMOS technology is already there, so we don't have to invest in that; this is all about architecture and understanding what the brain function looks like. So I just wanted to put that on the table at the end, but that's pretty much what I wanted to share today on hardware. Are there any questions? No? Okay, thank you very much.
Next up was Dr. Wolfgang Maier, Director of Hardware Development at IBM, who gave us an overview of IBM's Power line of servers, and how it is the best hardware solution for the Kolab platform thanks to its performance enhancements and its open architecture.
10.5446/54584 (DOI)
Let's talk about how to buy something that's free. And that's an interesting concept, because we all have to deal with it, being open source minded people. There is a lot of misunderstanding in the general corporate and governmental world about how to handle open source projects, or even how a business model can exist around open source products. To most people, especially procurement officers, buying something that's free makes no sense at all, and therefore it can never work, and therefore it can never be bought, and therefore, well, you can see where I'm going.

But first, let me say hi. This is me, I'm Hans de Raad, owner of OpenNovations. OpenNovations is a Dutch partner of Kolab Systems, so we do implementations and security consulting. I teach at a university of applied sciences, and basically I do whatever I like for which I can find a customer. And we're not alone; we're working together with a couple of other companies and freelancers to provide services.

I find it absolutely paramount to have at least one picture of my car in every presentation. And a personal challenge of mine is to at least say something remotely connected to my actual topic about my car. The thing about my car is that, in essence, it is an open source community on four wheels. It has modular parts, and its parts have open blueprints. If you have a really advanced 3D printer, you can actually print out my car, assemble it and have it working. And the beauty of that is that my car is not only an open source ecosystem on wheels, it's also an economic ecosystem on wheels, because thanks to my car, at least three garages in my home vicinity are able to service it on a regular basis. They all have their specialties and they can all work on my car, contrary to modern cars. I used to have a newer Mercedes ML before this, and with that car I could practically only go to the Mercedes-Benz dealer. Which wasn't that bad, because they had decent coffee and stuff, but still, I like the concept of being independent and open. And the thing is, I work with computers day in, day out, and I want something that is as reliable as possible and as simple as possible. Therefore I own a car that doesn't have computers. So whenever somebody sets off an electromagnetic pulse over the Netherlands, my radio won't work, but my car will still drive.

So what's my background in public procurement? Basically, I've been advising government agencies for a number of years now on how to handle open standards and open source, and basically service-oriented products, and how to procure them. I've been involved in a number of EU tenders, both in defining requirements and in the eventual selection process. And I've also had the pleasure of attending at least three court cases surrounding those tenders, because eventually you will always piss off somebody, and that somebody is usually the party not getting the contract. How strange. At the moment I'm still involved in those governmental circles, especially through my association with the Forum Standaardisatie, which in the Netherlands is the governmental body deciding which open standards are mature enough for adoption in the governmental realm. This was actually the body that in 2004 or 2005 adopted ODF as the leading document standard in the Netherlands, and I was the poor soul who got to implement ODF within the Ministry of the Interior and the Ministry of Justice.
If any one of you experienced the same during that period of time, I strongly sympathize with you. But still, it was fun.

So how does procurement work in general? Usually an itch needs to be scratched at some point, and organizations tend to solve problems, or perceived problems, by adding new items into the equation. I rarely come across organizations that have a sense of urgency or perceive a problem and then decide to cut something out. Because apparently, in our human psychology, it sounds really good to add stuff to a problem, since that implies you're actively working towards something new or more; that sounds better than taking stuff away from a problem, for some reason. And basically that's why procurement will, until the end of times, be a popular hobby of a lot of governmental organizations and other parties.

So you will need some requirements, you will put these requirements out into the market and request proposals, and you will then enter a selection and a decision process. But in general, the most important bit is the fact that you actually have to go through such a procedure to acquire a product. And if this whole procedure seems strange to you, being open source minded, well, it is, because being open source minded and using open source software, what we basically do is: we formulate our requirements, then we go to git.kolab.org, we download Kolab, install Kolab, and then we have Kolab. Notice what's missing here: the rest of the procedure. So the whole open source thing makes procurement officers a bit itchy, because it goes outside of them. And that's exactly what I want to talk about, because we need to figure out a way to frame the open source proposition into viable units, or we need to frame our business model around the open source product into a viable unit. Because to a procurement officer, if it doesn't come in a box with a ribbon around it, it must not be a product, therefore it must be a service, and services are strange. Okay, they can handle that, as long as it's man hours or woman hours and we charge money for that. But that whole notion is different with open source, of course. And therefore open source is usually also put in the corner of bespoke software, bespoke software usually being seen as a risk for your organization's future, because it can only be maintained by one party, etc. And if this sounds strange to you, well, it does, but it is actually a misunderstanding that keeps going around for this reason.

So, procurement starts with an itch, and this itch can be something that's missing, something that needs to be replaced, something that needs to be updated, anything at all. It can be an organizational demand or whatever, but at some point, in almost any IT tender, you will see a couple of principles which basically mean: whatever we're going to buy must be exactly what we already have, or at least be so similar to what we already have that we don't actually notice it's there. And for years and years and years, governments and also corporations have explicitly specified that whenever they bought a computer, it should be the Windows platform. Literally, in the public procurement requirements: Intel processors, Windows operating system, period. The European Commission had already decided, I think it was even in 2001, that this was an undesirable policy because, well, vendor lock-in and yadda yadda yadda; I'm sure I don't have to bore you with all the details. But countries went on doing this. Actually, I was in a public tender in 2009.
It was the EASI2010 tender for the Dutch government, which was a government-wide tender for the national government to basically replace all desktops and servers and printers and data centers, etc. I was liaison for the Forum Standaardisatie back then, so I had to take a stand to make sure that open standards got in there and that competition was built in. And the first requirement that the working group came up with was: we need Microsoft Windows and we need Intel. That was in 2008. So between the European Commission deciding something and a national government actually doing it, there can be quite some time.

But the whole concept of the tender in general is to make sure that you end up with a product that fits into your infrastructure anyway. And a lot of effort is put into the contractual phase, which basically means: who can sue whom whenever something goes south. And that's also a bit of an issue with open source licenses in general, because they state that the software is free, as is, without any warranty, etc. And as soon as you show such a line to a procurement officer, well, usually they turn green and they're like: can it really be good if they deny all accountability upfront? Then it must be rubbish. So that's something we need to work at too; we need to work on the perception.

So we need to explain how to actually sell open source, because public tenders, and it doesn't matter what they're actually trying to procure, will always try to buy a bridge, whether it's software, whether it's servers, whether it's man hours or whatever. Basically, all public tender procedures originate from infrastructure projects, and most procurement officers will approach an IT project as if it were building a road or a bridge. And that's a problem, because a software architecture by nature is way more modular and layered, and also has different types of interfaces and ongoing services, etc., than you would have with a piece of road. So there actually is life after the procurement process, and this is a real change of attitude for a lot of procurement officers nowadays, because instead of only being involved in the first acquisition, they will now also become ongoing contract partners in the ongoing performance of the contract.

Modularity is a keyword that's popping up quite a lot lately, especially in the Netherlands. We've had a parliamentary inquiry into why so many government IT projects tend to go over budget, tend to never finish, and tend to not deliver whatever was expected of them. And basically what they realized was: well, since we are buying software as if we are buying a bridge, we have an issue, because for a bridge it's pretty simple. The land is there, the roads are there, the rivers are there, they won't go anywhere. So basically you can take years and years to build a bridge, but usually, if you don't have earthquakes or floods or other stuff, the connections will remain. But in software, especially process management software, ERP, document management, you have at least three factors outside of your IT project which influence the project: it's the IT infrastructure itself, which has a legacy and a complexity; it's the organization around the IT department, whose demands on a certain project will change over time; and it's also the political reality surrounding a project, as the Dutch parliamentary inquiry showed.
One of the biggest lessons learned, not necessarily a lesson well learned and well implemented, but at least they learned it, was that the influence of the House of Parliament adding changes to a project during its realization effectively meant that no project could ever finish. So what happened then? Modular procurement came in. The idea of modular procurement, and OpenForum Europe has a very nice document on the whole concept of modular procurement in IT, is about interoperability, vendor independence, digital sustainability through open standards, and especially also about defining an exit strategy before you even start implementing a product. And that's another lesson learned from the parliamentary inquiry in the Netherlands: all we'd been doing for the past 20 years in IT was buy into products without having a single idea, once we were locked into the product, of what to do when the product became obsolete or went away.

The best example of this, for instance, was a document management system developed by the Ministry of the Interior in the Netherlands, based on Compaq's Work Expeditor software, which was a plugin for Microsoft Outlook XP, and the whole organization, every signature in the organization, every document proposal or anything, had to go through that system. So it was based on Office XP. You want to know when it was implemented? 2008. Which was a bit of a problem, because by then Compaq didn't exist anymore, the Work Expeditor product didn't exist anymore, and Microsoft's XP was on, well, let's just say, terminal life support. And that's what happens quite a lot. In the follow-up project after that, they chose, well, not entirely an open source system, but one built on open standards, namely IBM FileNet, which was actually implemented impressively fast: the first project took them seven years, and they implemented the FileNet system in under two years, which actually wasn't bad. The thing with FileNet and that approach was that instead of creating one homogeneous system to rule all the documents, FileNet would simply be a self-contained unit for documents. It wouldn't necessarily integrate with Outlook or any other productivity tool; it would just be a document management system and it would interface with different systems.

And this slide, well, let's just say I'm not sure if this slide is still relevant tomorrow, or if the UK even still exists tomorrow, but let's just say that the UK government does something very right, especially their digital services story. And what's really nice is that modular software also enables smaller units of procurement, and smaller units of procurement also imply that SMEs can bid for them as well. One of the core principles of the economic agenda of Europe is stimulating SME participation in tenders. So what you need is smaller units of procurement. The UK government has defined a number of domains in which they will seek partnerships with smaller parties, and in which they allow smaller parties to apply through a marketplace to build such modules. It goes beautifully; by now there are almost 200 participants, among them a lot of SMEs. The Swedes do the same, with a bit of a different model: they publish micro-challenges, micro-tenders, in four domains, and they have as base principles that they require open standards and proven interoperability with existing applications through open standards.
And they will actually also demand, when developing bespoke software, that the software's intellectual property is then transferred to the Swedish government so that it can be reused within the Swedish government. So it's not 100% open source yet, but at least we now have reusable components from taxpayers' money, which is a plus.

Then the US government, which I personally find a really interesting development, because the US for me was always a bit synonymous with the big software houses, the usual suspects, and therefore I would not have expected them to adopt such a policy. But the Department of Homeland Security came up with this a couple of years ago, and by now it's the standard for IT development and IT procurement in the US government. Basically what they said was: we throw away waterfall, we now adopt agile, and we want modular, reusable units. So that's interesting. It's actually a readable memo as well; you might even take a look at it.

And in the Netherlands, we also, even in the Netherlands I would say, because traditionally we would really suck at EU tenders. We would make them as big as possible, including the kitchen sink and everything. But by now, yeah, actually, the picture in there is there for a reason, because we would prefer anything big, be it the Delta Works, which are interesting to look at, by the way; if you're ever in the Netherlands I recommend that. But PIANOo, which is the public procurement expertise centre of the Dutch government, came up with a guide on micro-challenges, which basically is a guideline for cutting up big tenders into smaller, modular units. And they're now actually advocating downsizing procurement projects instead of advocating bigger projects. It's an interesting paradigm change. And even in the Dutch ARBIT, the government's general terms for IT procurement, we have a couple of articles which, if you read them well, lead to the obvious conclusion that open source fits perfectly.

Right. So, but there is one thing that open source, or at least open source projects, are quite bad at, and that's, let's just say, anything outside of software development. There's a lot of interest and attention going to user experience and user friendliness these days, and Kolab is actually spearheading that by making all their products have a very consistent look and feel and by thinking about usability quite a lot. And we need more of that, especially with regard to the adoption of open source in these niche markets, because all these niche markets have different compliance models and guidelines. And the thing is that actually they're not that different. Basically, what they all want you to do is adopt a risk management strategy and make sure that in your system you have some way of accountability: who did what at which time, and who can tell me why that was a good idea. Traditional vendors simply state that they are compliant with all these different schemes, which in fact they aren't. And that's the nice thing about it, because if you actually read the compliance statements from Microsoft, Amazon and Google, and I can go on quite a bit, and actually I did read them, you will not find a single concrete promise in there at all. It's all could be, should be, would be, might be, in a nutshell. But it reads beautifully. All that a procurement officer will take from this is: aha, it says FIPS compliant in there, it must be good, etc., etc. So we should do the same.
Open source projects need to think about how to promote themselves in these branches, and it's actually not that hard. It's just a different set of documentation, a different set of promotion tools, a different way to enter a market. Open source projects traditionally are fairly inward focused, based on their functionality, and within their community they're very open and transparent. But by themselves, communities are a bit like walled gardens. Go outside there.

And we also need to promote modular procurement. I mean, who's a business owner here? I am, and I see a couple more. Excellent. So, who works for a business? The rest, probably. Help make this happen. Go to governmental networking events regarding tenders, because they are there, and actively ask for modular development. Because, strangely enough, governments, and especially procurement offices of governments, now knowing that modular development and modular procurement are on the agenda, actually want to learn; but you're the ones having to teach them in that respect. I've been doing that for the Dutch government for years now; I've been giving workshops all over the place on how to do modular procurement. And the best thing is that giving those workshops has become a business model for me as well, because now I get paid to do that. Sweet.

Also, adopt at least some basic outline of a risk management strategy in your open source project's policies, because that's important as well. You should at least be aware that those things are required when participating in governmental tenders, and there are some quite readable documents on that. We also have the OSSTMM methodology regarding risk management, which gives a very clear indication of how you should approach risk management in general. So it's doable. But in general, make sure that the world knows that you're doing this.

So what do we usually do? We create great software, but we lack formal structure and organization, and we need to work on that. We need to describe our governance structure and we need to describe how things work. We need to be able to point to a set of documentation and information that a procurement officer can read to learn and understand the business model. So, how do you buy something that's free? Not by itself, but you can buy the business model around it, and you can make money from it. And we should. We make great software; we should make money out of it. And we should battle the issues and misconceptions that arise whenever the word "free" comes up, because free is usually not associated with freedom. In procurement, free is usually associated with worthless, and freedom is usually associated with anarchy, etc., etc. And we have to explain that message better, because it is the economy, and we need to start adopting models that make us able to sell our great products. And there are some excellent examples of companies who do that: Red Hat, Kolab, SUSE. For instance, they all have a premium enterprise support model, and that works fine. There are different models as well: you have OpenStack and Drupal, which basically are arrays of organizations collaborating on a certain project and providing services around the project. Take a look at them and learn from them. Because governments really want to buy open source. But in that sentence is a contradiction in terms, because you can't buy something that's free. But they can buy into the concept through our companies, and we should make them able to do so. Any questions?
Go and sell open source.
The business side of things was covered by Hans de Raad, an independent ICT specialist, founder of OpenNovations, and a long-time Kolab partner and friend. Hans explained how it is possible to grow a business around Open Source software despite the naysayers, how to find your niche market, and the intricacies of public procurements.
10.5446/54585 (DOI)
Well, I mean, I propose we just turn the Q&A into an open discussion, actually. Does it work? Excellent. All right. Questions, comments, thoughts, input. I think we'll need to use the microphone for the benefit of the stream. Someone is coming. Yeah, all right. So let's not stand in front of the speaker, because that will produce interesting surround effects.

My comment, actually, is that, being one of the people who organized last year's Kolab Summit, first of all, I'm very happy to see you guys putting this on this year. So thanks for that. And also, I think it's a good thing to be more visible, as you're doing with the Kolab Tasters and in the OpenPOWER Foundation. And it's also a very targeted visibility, I guess. So you're profiling yourselves as a full-stack technology partner, not just the groupware niche you're filling. And in that aspect, since you're currently also looking at the hardware side and developing the software side yourself: what are your insights on other open source platforms you might want to connect to in the future? I mean, you've done a lot of work around here, done a lot of work with the KDE community, you're currently also doing a lot of stuff yourself, you are in Kube and some other stuff. Where do you see that going? Do you see more collaboration there?

Well, I mean, we, as you know, now it's getting interesting. The audio engineer. So, yes, thank you for the comments and the questions. Indeed, we've deliberately started to work more professionally, also in the way we position ourselves and of course communicate. And that's a good thing, because while geeks may not always be overly enthused by design, although personally, to be honest, I always was, it is the only way you can ever really reach a large audience: by doing this professionally. I mean, that level also must be professional. And we had the hugely fortunate situation that we could win Giles over to be our creative director. In fact, I remember that Giles approached me at the last Kolab Summit in The Hague with the thought of maybe doing this full time for us. So it's like summit-to-summit improvement, in a way. And of course we were ecstatic to have him with us, because his work is all over the Taster format. It's all over our corporate design; it's in the way that you see our visuals now, our materials for partners to develop. All of that is his work. Right, I mean, he is the person leading that. And given that he was managing director of a creative digital agency in London that he had co-founded and was running, which was very successful, one of the celebrated agencies in London, in fact, and decided to leave that behind to join Kolab and push the Kolab story with us, for me that was an incredibly moving moment, actually. I mean, I was humbled, to be honest.

On the technical side, of course, we will continue to work with others. We work with LibreOffice now a lot; there's a couple of others I am sure we will work with, although on the technical side Aaron's the guy who makes that call, together with Jeroen. These two guys, when they put their heads together, magic things happen; I get informed of the results. But yeah, the only way we actually do this is by engaging with upstream, by doing it in collaboration. For us, upstream is always king; if it's not going into the upstream, it is a liability. Simple as that. So we want the upstream. I mean, we're working very, very hard to convince the upstream.
In fact, if you follow how Kube came about: we were working extremely hard over a long period of time to show these are the shortcomings we need to address, to help people understand why certain approaches were not sustainable if you want to run this in a professional environment, why certain things had to be addressed. We gave presentations about that, we had long meetings about that; there were multiple meetings of the KDE PIM community through which we communicated all of this. And then we ultimately said: all right, this is the way it needs to go, and we'll do it now. Please, everyone who wants to join, join in. But we always try to do things in the most inclusive way possible. We often take a lot of extra effort and work to do it the right way, because for us the goal needs to be that the entire community gets stronger. We cannot fragment our community. Splitting away very often seems like the easiest solution. Forking, a lot of people think, oh, let's fork this, I don't like how this goes, let's fork it. And sometimes that may be necessary; there may be moments when there is no other alternative. But at the same time, of course, fragmentation also has a cost. So we try to reduce the fragmentation, to not just create yet another branch of something, but to work with the community, make it upstream, and then build a stronger community as the result. So yes, that's very much an attitude you will see from us in the future as well. Aaron, anything you want to add? Jeroen? Go for it.

So, part of what we do now is be the glue between a lot of components. This makes us currently maintain stuff and write stuff in seven languages, Aaron? Seven languages. This is overly cumbersome. I speak most of them, and sometimes I work in C on something, say in Cyrus IMAP, where things need to happen to resolve actual issues, which is quite costly. It doesn't allow me to learn and experience C at a pace that actually makes me a properly good C developer. And the same goes for Python, and we have Erlang, and we have PHP, and we have C++, and so it becomes overly difficult. Part of what you'll be seeing online will shift that: we have set Phoenix and Elixir as the web development framework, as the standard to start prototyping things in. So you'll see a bunch of those prototypes, maybe screencasts, or what we otherwise do in agile development retrospectives; these are weekly occurrences. When we think we have something that is sufficiently visually appealing and not embarrassing, we'll post that online. The source code will be online, so you can look it up yourself and laugh at us. But a lot of prototyping. So: new ideas that we can whip up something for in two to four hours maybe, and then decide whether or not that's indeed what we want, what we would like to see and how it should work. And then lather, rinse, repeat. Do it again, different idea, prototype that. So, yep.

Thank you. And by the way, just to point it out, git.kolab.org is where you want to have a look. That is where we do our own sprint planning, development, and so on and so forth. It's an open system, right? Everyone is welcome to make an account there and get active. You can be part of this with zero burden. And you're going to be working in the exact same system as us. There's no second system; we're not having a secret internal system where we also do this. No, no. That's the system. I think if I stand here, it starts singing. Who's next? Hello. This is my chance.
So I wanted to ask: can a junior collaborate, or help Kolab, while having few skills? Well, I am asking about the level of knowledge. Can it be at junior level, or...? Oh, yeah. I mean, we have uses for people at every level. We generally follow a culture of giving people things to tackle that they feel they can tackle, and then, if we see that they need help, we help them. But often people are also capable of doing more than they think they are. So we try to work with people at their levels, right? And yes, a senior person might be able to do certain tasks a lot faster or better, but there are always things that you can do even at junior level. In fact, I believe we need a lot more junior people to get involved, because they will turn into senior people at some point. And we have plenty of things that can get done. So you are very much encouraged to get involved. Thanks. And I mean, just hit the mailing list or the IRC channel or whatever, just approach us. We are generally pretty approachable, I hope. Most of us tend to be usually quite friendly. I haven't seen anyone bite anyone else yet. So yes, I mean, he's Dutch, you've got to forgive him. He's blunt, but he means well. With him, you never have a doubt about what he's actually thinking, which has its own merit. But once you get to know him, he's actually a very sweet character. He just hides it very well.

All right. Who else? Any more comments? Any more thoughts? Sorry, what? An artistic moment? An artistic moment. Well, that would be Aaron's thing. Aaron went out into Nürnberg last night to do karaoke. So actually, Peter, you went with him. So. I did the Hallelujah with Aaron. And he would give you a red tape. So it's uncomfortable. Yeah, I mean, one of the organizational principles from day one has been to actually have fun. Because while we want to save the world, you know, we don't want to end up as sour, bitter people at the end of it. So we need to somehow also have a little bit of fun. Therefore it's, I hope at least, an interesting place to work.

So anyway, and by the way, we are always looking for people who want to work full time for us, or part time, right? Don't hesitate to send us your CV. Apply, please, because we're constantly on the lookout for people. And especially good people, of course, are always sought by everyone; we realize that. But we have a pretty cool team. I mean, working with the likes of Aaron and Jeroen has its merits; they have a lot of insight to share. And even people who joined us as junior developers, such as Christian, have meanwhile taken on a lot of responsibility, including management responsibilities by now. So we believe in people, and we believe in allowing people to grow and helping them grow into new roles. I mean, Lisa, she joined us in the Kolab Now support realm and is now starting to organize the support as well as help us with the conference organization. The Kolab Summit here was to a very large extent her work. So thank you very much for that, Lisa. So yes, we want people to grow, and if you want to grow with us, apply, please.

I think that concludes it. And Lisa is giving me the almighty nod of conclusion, so yeah, that again. I know. All right. In that case, thank you all very much. Enjoy the rest of the afternoon. Hang around. Cheers from Switzerland, except the ones from Poland. They're excused. We forgive you, Alec. Yes, yes, yes.
And if you still want one of those big bottles, these beautiful red ones, right? Come and find us here at the summit and get your red bottle, because we will otherwise be taking them back with us. So make sure that we have very little to nothing left. All right. Thank you very much.
Join us this June 24-25 in Nürnberg, Germany for the second annual Kolab Summit. Like last year's summit, we've accepted the invitation of the openSUSE Community to co-locate the summit with the openSUSE Conference, which will be held June 22-26 in the same location. And because we have some special news to share and celebrate, we're also putting on a special edition Kolab Taster on Friday June 24th. The overarching theme for this year's summit will be how to put the freedom back into the cloud.